Article

Detecting Low-Orbit Satellites via Adaptive Optics Based on Deep Learning Algorithms

by Ahmed R. El-Sawi 1, Amir Almslmany 2,*, Abdelrhman Adel 2, Ahmed I. Saleh 1, Hesham A. Ali 1 and Mohamed M. Abdelsalam 3

1 Computers Engineering and Control System Department, Faculty of Engineering, Mansoura University, Mansoura 7660202, Egypt
2 Air Defense College, Egyptian Military Academy, Cairo 4804004, Egypt
3 Faculty of Engineering, Mansoura National University, Gamasa 7731204, Egypt
* Author to whom correspondence should be addressed.
Automation 2026, 7(1), 14; https://doi.org/10.3390/automation7010014
Submission received: 15 October 2025 / Revised: 8 November 2025 / Accepted: 31 December 2025 / Published: 6 January 2026

Abstract

This research proposes the design and implementation of an adaptive optical system (AOS) for monitoring low-orbit satellites (LOSs) to ensure that they do not deviate from their pre-planned paths. The system is built around a telescope whose primary mirror consists of six hexagonal segments, each with a side length of 30 cm, arranged in a regular hexagon, with a spectral analyzer at the center to separate the spectra emitted by stars from those reflected by low-orbit satellites. For tracking, a SwinTrack-Tiny (STT) model is modified to exploit temporal information: a purpose-built dynamic template image is inserted as a third input to the model, and its features are fused with those of the primary template through an attention block. To preserve the dimensions of the original model and reuse its pre-trained weights, a four-head attention block is used.

1. Introduction

Most studies and explorations in space have been carried out by scientists using numerous ground stations and large balloons launched into the upper atmosphere, which revealed some of the secrets of the distant atmosphere and encouraged the replacement of these balloons with rockets carrying various instruments. Scientists sought to launch such carriers into space so that the collected data could be transmitted back to the ground station [1]. This was achieved by Soviet scientists with the launch of the first artificial satellite, Sputnik-1, on 4 October 1957, which settled into a 96 min orbit around the Earth. It was followed by the American Pioneer and Explorer satellites in 1958, with which the Van Allen radiation belts surrounding the Earth were discovered.
When scientists saw the great benefits achieved by these small space stations, they developed them, increased their number, and adjusted their orbits to obtain broader and more accurate information about the Sun. These satellites have significant impacts on the Earth’s geosphere and climate [2].
Classification of satellite orbits: After satellites are launched by propellant rockets, they enter their trajectory—which is theoretically similar to the trajectory of planetary satellites, except that it is close to the surface of the Earth—and are subjected to various disturbances that lead to deviation from their trajectory. This necessitates compensating for this deviation, usually using solar energy [3] and programmed mechanical techniques. Thus, their orbits are often elliptical and classified as follows.
Super High Earth Orbit (SHEO): The altitude of this orbit is more than 36,000 km, and its orbital period exceeds the sidereal day, which is approximately 23 h, 56 min, and 4 s. This orbit is also known as the super-synchronous orbit [4]. Satellites placed in these orbits lie largely outside the influence of the geomagnetic field (magnetosphere), experience only a weak effect from the Earth's oblateness, and are essentially free of atmospheric drag, so they are more stable than satellites in lower orbits and have a longer lifetime. Most satellites in these orbits are used for astronomical research [5].
High Earth Orbit (HEO): The altitude of these orbits is about 36,000 km, and the satellite completes one revolution in 24 h; its angular motion matches the Earth's rotation, so it appears stationary in the sky. These satellites are used for communications and television broadcasting. Examples include the geosynchronous and geostationary (Earth-fixed) orbits [6].
Mid-Earth Orbit (MEO): The altitude of this orbit varies between 10,000 and 20,000 km, and the satellite's orbital period is approximately 12 h; it completes two revolutions per day, so this orbit is sometimes called the semi-synchronous orbit. Satellites in this orbit can be observed from a ground station for two or more hours, the time they take to cross the portion of the sky visible to the observer. Examples include the satellites of the Global Positioning System (GPS) and GLONASS. GPS satellites are placed at an altitude of almost 20,000 km and orbit the Earth twice a day. They are used for civil and military purposes, as well as for determining sunrise and sunset times and locations [7].
Low Earth Orbit (LEO): The altitude of satellites in such orbits varies between 300 and 800 km, their orbital period is less than 225 min, and their speed reaches approximately 7.6 km/s to balance the force of gravity due to their proximity to the Earth's surface. Because of this high speed, they cannot be monitored from a single ground station for more than about 10 min; they rapidly cross the portion of the sky visible to the observer. Because of their proximity to the Earth's surface, these satellites are exposed to orbital disturbances caused by atmospheric drag and the Earth's oblateness, so they are relatively unstable and short-lived. Examples include satellites for remote sensing, weather, photography, and reconnaissance [8]. This type of orbit is the subject of this study.
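As a quick consistency check on the quoted LEO speed of roughly 7.6 km/s, the circular-orbit relation v = sqrt(mu / (R_Earth + h)) can be evaluated directly. The sketch below is illustrative only and is not part of the authors' system; the constants are standard values, and the altitudes simply span the 300 to 800 km range given above.

```python
import math

# Minimal sketch: circular orbital speed v = sqrt(mu / (R_earth + h)).
# Constants are standard values; the sampled altitudes are illustrative choices.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Return the circular orbital speed (m/s) at the given altitude."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

def orbital_period(altitude_m: float) -> float:
    """Return the orbital period (s) of a circular orbit at the given altitude."""
    r = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(r ** 3 / MU_EARTH)

if __name__ == "__main__":
    for h_km in (300.0, 500.0, 800.0):
        v = circular_orbit_speed(h_km * 1e3)
        t = orbital_period(h_km * 1e3) / 60.0
        print(f"h = {h_km:5.0f} km  ->  v = {v / 1e3:.2f} km/s, period = {t:.1f} min")
```

At 500 km this gives roughly 7.6 km/s and a period of about 95 min, consistent with the figures quoted for LEO.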
Adaptive Optical System (AOS): Optical systems can be divided into conventional and unconventional systems, depending on the nature of their adaptation to the surrounding conditions and the associated effects on image quality. Traditional systems (passive systems) produce images whose quality depends on the type of optical system and the type of surrounding conditions, such as weather conditions, movement, vibration of the system, and movement of the body. One of the most important factors causing wave front disturbance is the difference in the refractive index of the atmospheric layers as a result of the difference in their density. This must be taken into account when using terrestrial astronomical observatories that receive images coming from space through the atmosphere [9].
Non-traditional systems, called adaptive optical systems, change their parameters in response to external stimuli in order to maintain image quality. This technology was invented to improve observation imagery by reducing disturbances (e.g., turbulence in the light waves coming from the source). It is used in many devices, such as ground-based and space observatories, as well as ordinary and digital cameras. Adaptation is carried out by measuring the disturbance in the incoming wave front. These techniques include the use of deformable mirrors or lenses that change the shape of the received wave front to remove any disturbance, as shown in Figure 1, thus producing a high-quality image. Another technique uses small optical parts (segments) that can move on multiple axes and are assembled into an adaptive optical system whose parameters change automatically depending on the type of disturbance present in the received wave front. These parts can be mirrors, lenses, or any other component of the system capable of adjusting its parameters [10].
Adaptive optical systems are divided into two types: ordinary adaptive optical systems, which change their parameters and process images themselves; and active optical systems, which process images in a very short time without moving the parts of the system. An adaptive optical system consists of three main parts: 1. a wave front sensor that measures the turbulence in the wave; 2. a computer that calculates corrective signals for the wave front based on the sensor data; and 3. corrective devices that apply the computed correction to the wave front [11].
These three components are used in astronomical observations, allowing several hundred images to be processed per second. In the simplest case, the corrective device is a single mirror that can be tilted about two perpendicular axes, with which the image motion caused by atmospheric influences (wind, temperature changes, changes in air density, and so on) can be compensated. Other, second-order corrections address focus errors and higher-order image aberrations, for example using a mirror with a deformable (elastic) surface or a liquid-crystal corrector [12]. A flexible mirror can also be used to flatten the light wave front incident on the telescope; by controlling such a mirror, the collimated laser beam that the telescope sends into the atmosphere can be adjusted.
The main medium through which light waves travel to astronomical telescopes is the atmosphere. Since outer space is almost devoid of matter and is at an almost constant temperature, it is considered a homogeneous free medium. The Earth's atmosphere, by contrast, is a large, nonlinear, and inhomogeneous (anisotropic) medium that changes constantly in a random way, which affects light as it propagates through it. The model describing the nature of wave front disturbances introduced by the atmosphere was first proposed by the Russian mathematician Andrei Kolmogorov [13]; it is supported by a variety of experimental measurements and has been widely used in simulations of astronomical seeing. Kolmogorov's theory of atmospheric turbulence is based on the assumption that turbulence changes the refractive index, which affects the optical field propagating through the atmosphere [14].
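To make the Kolmogorov description concrete, the following sketch generates a random phase screen with the Kolmogorov power spectrum using the common FFT-based method. It is a generic illustration under assumed grid size, pixel scale, and Fried parameter r0, not the turbulence simulation used in this paper; the spectrum normalization follows the usual convention for this method.

```python
import numpy as np

# Minimal sketch of an FFT-based Kolmogorov phase screen, as commonly used to
# simulate atmospheric turbulence in AO studies. Grid size, pixel scale, and r0
# are illustrative assumptions.
def kolmogorov_phase_screen(n: int = 256, pixel_scale: float = 0.02,
                            r0: float = 0.1, seed: int = 0) -> np.ndarray:
    """Return an n x n phase screen (radians) with Kolmogorov statistics.

    pixel_scale : metres per pixel; r0 : Fried parameter in metres.
    """
    rng = np.random.default_rng(seed)
    # Spatial-frequency grid (cycles per metre).
    f1d = np.fft.fftfreq(n, d=pixel_scale)
    fx, fy = np.meshgrid(f1d, f1d)
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1e-12                      # avoid division by zero at the piston term
    # Kolmogorov power spectral density of the phase.
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                      # remove piston
    # Random complex spectrum shaped by the PSD, then an inverse FFT.
    df = 1.0 / (n * pixel_scale)         # frequency grid spacing
    cn = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) \
         * np.sqrt(psd) * df
    # numpy's ifft2 includes a 1/n^2 factor; multiply back to get the plain sum.
    return np.fft.ifft2(cn).real * n * n

if __name__ == "__main__":
    phi = kolmogorov_phase_screen()
    print("phase RMS (rad):", phi.std())
```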
Adaptive optics in reflecting astronomical observatories modulate the curvature of a mirror by changing its position and inclination. Observatories are built around a large primary mirror that collects enough light to capture clear images of celestial bodies. For this reason, certain types of ceramic glass are used; they have a low coefficient of thermal expansion and are thin and lightweight. However, such thin mirrors can lose their figure and change shape, resulting in low-contrast images that are not fully clear. To correct these poor-quality images, the mirror rests on piston-like supports driven by a controller: small piston units are raised or lowered so that differences in the curvature of the mirror's surface are equalized, and the shape of the mirror can thus be adjusted to produce a clear image [15].
Visual object tracking: Visual object tracking is a field of computer vision that is concerned with determining the status of a target object in each frame of a video. It is a subject of active research due to the many challenges and problems involved in tracking, and it has applications in various fields such as autonomous driving, space, military, entertainment, and security, among others [16].
In recent decades, a large number of tracking algorithms have been developed, most notably NCC, mean shift, MDNet, MOSSE, and others, which use various techniques to solve tracking problems. Given this diversity of methods, we decided to focus on deep-learning-based tracking algorithms for several reasons: deep learning techniques have recently been widely used in tracking research, large datasets such as GOT-10k and LaSOT are available to train deep learning models, and deep learning techniques tend to show better performance than traditional techniques.
The emergence of many deep learning applications has made it possible to perform real-time tasks efficiently [17]. This study examines the satellites of the Egyptian Space Agency in LEO to determine the amount of deviation caused by space debris and ground control errors. This study is divided into two parts: the first addresses adaptive optics and the proposed optical spectral analyzer, while the second focuses on computer vision and deep learning algorithms to develop an integrated system which is capable of tracking targets in orbit (LEO).
The remainder of this paper is organized as follows: Section 2 presents the methodology, including the design and implementation of the proposed adaptive optics system and the development of a tracking algorithm. Section 3 presents the results and testing of the complete optical hardware and software algorithms, followed by the conclusions in Section 4.

2. Methodology of Proposed System

In this section, we review the design of the adaptive optics of the proposed system, which involves the following steps:
  1. Designing and simulating an adaptive optical system: a parabolic ground-based reflecting telescope whose segmented primary mirror consists of six hexagonal mirror segments, together with a small hexagonal element, each of which can move along three axes.
  2. Simulating a turbulent wave front entering the system by using an imaginary surface that distorts the waveform.
  3. Using the ZEMAX program to change the parameters of the optical system, such as the focal length and radius, to correct the wave front deformation and obtain a high-quality image.
  4. Comparing the image formed by the adaptive optical system with that of the conventional (non-adaptive) system using the image evaluation tools provided by ZEMAX, and developing a tracking algorithm.
This research focuses on single-object tracking without prior information about the target, known as general object tracking. In this context, we define the tracking process as estimating the state of the target object in successive frames of a video based only on its appearance in the first frame.
The AOS-corrected images are used directly as input to the SwinTrack-Tiny model. The deep learning tracker was trained on GOT-10k data but evaluated on the AOS-enhanced image sequence, ensuring that temporal template updates are applied to aberration-free frames.
The state of the object can be represented by a rectangle surrounding it in each frame of the video, referred to as the bounding box; it is selected manually in the first frame, and its contents form the template. After the template is selected in the first frame, a new frame from the video is taken, and the task is to estimate the bounding box of the object in that frame. For this purpose, we crop a region around the target's location in the previous frame with a certain size (for example, four times the previous bounding box); this region is referred to as the search window.
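A minimal sketch of this cropping step is given below, assuming the "four times" factor refers to the area of the previous bounding box; the function name and padding choice are illustrative, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the search-window crop described above: take the previous
# bounding box, scale its area by a context factor (4x here, matching the text),
# and crop that square region from the current frame.
def crop_search_window(frame: np.ndarray, bbox: tuple, context: float = 4.0) -> tuple:
    """bbox = (x, y, w, h) in pixels; returns (crop, crop_origin)."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0            # box centre
    side = np.sqrt(w * h) * np.sqrt(context)     # square window with 'context' x box area
    x0 = int(round(cx - side / 2.0))
    y0 = int(round(cy - side / 2.0))
    x1, y1 = x0 + int(round(side)), y0 + int(round(side))
    # Clamp to image bounds (border padding could be used instead).
    H, W = frame.shape[:2]
    x0c, y0c, x1c, y1c = max(x0, 0), max(y0, 0), min(x1, W), min(y1, H)
    return frame[y0c:y1c, x0c:x1c], (x0c, y0c)

if __name__ == "__main__":
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    crop, origin = crop_search_window(img, bbox=(300, 200, 40, 30))
    print(crop.shape, origin)
```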

2.1. The Optical System Design and Implementation

Optical design refers to the calculation of variable optical parameters that meet a range of performance requirements and constraints, including cost and manufacturing constraints. Performance parameters include the types of surface shapes (e.g., spherical, aspheric, plane), the radius of curvature, the distance to the next surface (thickness), the type of materials used (glass, plastic, or mirrors), and the entrance pupil diameter. The requirements for optical design are as follows [17]:
  • Optical performance, which includes determining special parameters—such as the design size, location, and alignment—that influence the overall shape of the design. It also includes the evaluation of the quality of the optical system, which is assessed by a set of analysis methods provided by optical design programs, with the most important analysis methods being the optical transfer function, optical transition function, enclosed energy (accumulated energy), and the ray propagation diagram of a point object (spot diagram).
  • Material requirements (material applications), such as weight, static size, dynamic size, and center of gravity, and comprehensive configuration requirements.
  • Environmental requirements (environmental applications such as temperature, pressure, vibration, humidity, and radiation intensity of the system). Design limitations can include the edge thickness of the lens or the mirror, the minimum and maximum distances between the lenses, the maximum angles of entry and exit of rays to and from the optical system, and the type of glass from which the refractive index and dispersion are determined [18].
The optical system of the reflecting telescope consists of a set of six hexagonal segmented mirrors designed to form a parabolic mirror that reflects light toward the focal point in front of the mirror. The hexagonal shape of the mirrors gives these segments freedom of movement along three axes (three degrees of freedom) and provides a packing factor of 100%, as shown in Figure 2.
A spectral analyzer was placed at the focal plane to discriminate star and satellite signals. The analyzer employs a 450–850 nm diffraction grating module calibrated with a reference star to ensure spectral purity.
The following equation can be used to represent the image-plane intensity distribution [19]:
$$I_d(x, y) = O(x, y) \otimes \mathrm{psf}_d(x, y) \qquad (1)$$
where $x$ and $y$ are the spatial-domain coordinates, $O(x, y)$ is the 2D object distribution function, $I_d(x, y)$ is the image intensity distribution on the defocus plane, $\mathrm{psf}_d(x, y)$ is the point spread function (PSF) on the defocus plane, and $\otimes$ denotes convolution. Using Fourier optics, the relationship in the frequency domain can be expressed as
$$I_d(u, v) = O(u, v)\,\mathrm{OTF}_d(u, v) \qquad (2)$$
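A short sketch of Equations (1) and (2) follows: the image is formed by convolving an object with a PSF, implemented as a product in the frequency domain. The Gaussian object and PSF below are placeholders chosen only to make the example self-contained; they are not the paper's data.

```python
import numpy as np

# Minimal sketch of Equations (1)-(2): form an image by convolving an object
# with a PSF, computed as a product in the frequency domain.
def image_from_psf(obj: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Return I = O (*) psf computed via FFTs (circular convolution)."""
    O = np.fft.fft2(obj)
    otf = np.fft.fft2(np.fft.ifftshift(psf))   # centre the PSF before transforming
    return np.real(np.fft.ifft2(O * otf))

if __name__ == "__main__":
    n = 128
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    obj = np.exp(-((x - 10) ** 2 + y ** 2) / 50.0)   # toy "object"
    psf = np.exp(-(x ** 2 + y ** 2) / 8.0)
    psf /= psf.sum()                                  # normalise PSF energy
    img = image_from_psf(obj, psf)
    print("object peak:", obj.max(), "blurred peak:", img.max())
```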
One way to express the generalized pupil function is
$$P(x, y) = \sum_{j=1}^{N} P_j(x_j, y_j)\,\mathrm{circ}\!\left(\frac{x - x_{sj}}{D/2},\, \frac{y - y_{sj}}{D/2}\right)\exp\!\left[i\,\Delta\phi_j(x_j, y_j)\right] \qquad (3)$$
where
$$P_j(x_j, y_j) = \begin{cases} 1, & \text{inside the } j\text{th hexagon} \\ 0, & \text{outside the } j\text{th hexagon} \end{cases}$$
For each hexagonal sub-mirror of the segmented telescope, the pupil is denoted by $P_j(x_j, y_j)$ in the equation above, and the pupil center coordinates of each sub-mirror are $(x_j, y_j)$. The circular domain function characterizes the mask's form, and the center coordinates of each mask are $(x_{sj}, y_{sj})$. A mask is used to alter the pupil's shape; if the mask is positioned at the exit pupil plane, the non-opaque part of the mask controls the pupil's shape. The information in the frequency domain may be modified to separate the effects of tip/tilt errors and piston errors by adjusting the pupil's shape. The aberration of the sub-mirror is denoted $\Delta\phi_j(x_j, y_j)$, which may be written in terms of Zernike polynomials [20]:
$$\Delta\phi_j(x_j, y_j) = \frac{2\pi}{\lambda}\left(\alpha_{j1} Z_{j1} + \alpha_{j2} Z_{j2} + \alpha_{j3} Z_{j3}\right) \qquad (4)$$
The PSF of the optical system is obtained from the generalized pupil function via an inverse Fourier transform:
$$\mathrm{psf}(x, y) = \left|\mathcal{F}^{-1}\!\left[P(x, y)\right]\right|^{2} = \mathrm{psf}_{\mathrm{sub}}(x, y)\left|\sum_{s=1}^{N}\exp\!\left[i\,\frac{2\pi}{\lambda f}\left(x\,x_{s} + y\,y_{s}\right)\right]\right|^{2} \qquad (5)$$
The PSF of a single sub-aperture is $\mathrm{psf}_{\mathrm{sub}}(x, y)$. The complex $\mathrm{OTF}(f_x, f_y)$ may be obtained by taking the 2D Fourier transform of $\mathrm{psf}(x, y)$:
$$\mathrm{OTF}(f_x, f_y) = \mathcal{F}\!\left[\mathrm{psf}(x, y)\right] = \sum_{s=1}^{N}\mathrm{OTF}_{\mathrm{sub}}(f_x, f_y) + \mathrm{OTF}_{\mathrm{sub}}(f_x, f_y) \otimes \sum_{m=1}^{N-1}\sum_{n=m+1}^{N}\delta\!\left(f_x \pm \frac{x_{sm} - x_{sn}}{\lambda f},\, f_y \pm \frac{y_{sm} - y_{sn}}{\lambda f}\right) \qquad (6)$$
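The sketch below illustrates Equations (3) to (6) under simplifying assumptions: circular sub-apertures stand in for the hexagonal segments, the geometry and grid sampling are arbitrary, and per-segment piston and tilt phases play the role of the aberrations $\Delta\phi_j$. It is not the ZEMAX model used in this work.

```python
import numpy as np

# Minimal sketch of Eqs. (3)-(6): build a segmented pupil from several circular
# sub-apertures (hexagonal masks would replace the circles in the real system),
# apply per-segment piston/tilt phases, and compute the PSF and OTF by FFTs.
# All geometry values below are illustrative assumptions.
N_GRID = 256
PIX = 0.01                    # metres per pixel in the pupil plane (assumed)
SEG_RADIUS = 0.3              # sub-aperture radius, m (assumed)
ANGLES = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
CENTRES = [(0.7 * np.cos(a), 0.7 * np.sin(a)) for a in ANGLES]   # six segments

def segmented_pupil(pistons=None, tilts=None) -> np.ndarray:
    """Complex pupil P(x, y) with per-segment phase errors (radians)."""
    coords = (np.arange(N_GRID) - N_GRID / 2) * PIX
    x, y = np.meshgrid(coords, coords)
    pupil = np.zeros((N_GRID, N_GRID), dtype=complex)
    for j, (xc, yc) in enumerate(CENTRES):
        mask = (x - xc) ** 2 + (y - yc) ** 2 <= SEG_RADIUS ** 2
        piston = 0.0 if pistons is None else pistons[j]
        tx, ty = (0.0, 0.0) if tilts is None else tilts[j]
        phase = piston + tx * (x - xc) + ty * (y - yc)    # local piston + tilt
        pupil[mask] = np.exp(1j * phase[mask])
    return pupil

def psf_and_otf(pupil: np.ndarray):
    """psf = |FT[P]|^2 (Eq. 5); OTF = FT[psf] (Eq. 6), normalised at DC."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    return psf / psf.sum(), otf / np.abs(otf).max()

if __name__ == "__main__":
    psf_ref, _ = psf_and_otf(segmented_pupil())
    psf_err, _ = psf_and_otf(segmented_pupil(pistons=np.random.uniform(-1, 1, 6)))
    # Strehl-like ratio: on-axis intensity relative to the unaberrated case.
    print("peak ratio with piston errors:", psf_err.max() / psf_ref.max())
```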
The adaptive optical system allows the main design parameters to change in response to the distorted wave front reaching it, and it corrects the distortion by adjusting those parameters. This is achieved by moving the hexagonal mirrors laterally (decenter movement), so that they diverge from or converge toward the center of the system, which varies the system's field of view.
The mirrors also move axially (tilt movement), i.e., about the axes of their hexagonal shapes, to provide a variable radius of curvature for the system. The average wavelength of visible light (550 nm) was used, first at normal incidence and then at several different angles of incidence, to measure the effect of the angle of incidence on the image quality of the adaptive optical system, as shown in Figure 3(1).
The location of the actuators behind the mirrors in the proposed system improves the motor performance and accuracy, potentially increasing the power and speed of the micro-motor, which may provide additional benefits for AOS applications. Piezoelectric actuators can be integrated along the surface of the adaptive mirror in the form of an equilateral triangle, which allows the focal length of the adaptive optical system to be adjusted. In this concept, the actuators located below or on the back of the deformable mirror (DM) are lightweight and very efficient at relatively low power, since their small mass and high dynamic performance involve no frictional force. Figure 3(2) shows the shape of the deformation resulting from actuation; the principal regulation of the closed-loop system relies on a leaky integrator.
The voltages were reconstructed directly from the measured slopes using a reconstruction matrix derived from the truncated SVD technique. The condition number was set at an RMS error of 1.7746 × 10−8 to balance accuracy and robustness. The tracker was constructed using a combination of a hexapod and a PDSM-241: the hexapod executed low-frequency tip/tilt adjustment at approximately 1 Hz, while the PDSM handled high-frequency correction at a rate of 2000 Hz, making it especially effective for bright stars.
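The control scheme described above, a truncated-SVD reconstructor feeding a leaky integrator, can be sketched as follows. The interaction-matrix size, loop gain, and leak factor are illustrative assumptions rather than the values used in the real system.

```python
import numpy as np

# Minimal sketch of the wavefront control loop described above: a reconstruction
# matrix obtained by truncated SVD of the interaction (slope-response) matrix,
# and a leaky-integrator update of the actuator voltages.
def truncated_svd_reconstructor(interaction: np.ndarray, cond_max: float = 1e3) -> np.ndarray:
    """Pseudo-inverse of the interaction matrix with weak modes discarded."""
    u, s, vt = np.linalg.svd(interaction, full_matrices=False)
    keep = s >= s[0] / cond_max           # drop modes exceeding the condition limit
    s_inv = np.where(keep, 1.0 / s, 0.0)
    return (vt.T * s_inv) @ u.T           # reconstructor R: slopes -> voltages

def leaky_integrator_step(voltages: np.ndarray, slopes: np.ndarray,
                          recon: np.ndarray, gain: float = 0.4,
                          leak: float = 0.99) -> np.ndarray:
    """One closed-loop update: v <- leak * v - gain * R @ s."""
    return leak * voltages - gain * (recon @ slopes)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    D = rng.standard_normal((40, 18))     # 40 slope measurements, 18 actuators (toy sizes)
    R = truncated_svd_reconstructor(D)
    v = np.zeros(18)
    true_v = rng.standard_normal(18) * 0.1
    for _ in range(50):                   # simulated closed loop driving slopes to zero
        slopes = D @ (true_v + v)         # residual slopes seen by the sensor
        v = leaky_integrator_step(v, slopes, R)
    print("residual slope RMS:", np.sqrt(np.mean((D @ (true_v + v)) ** 2)))
```

The leak factor slightly discharges the accumulated voltages at each step, which keeps poorly sensed modes from winding up over time.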
Figure 3 shows the positions of the actuation system behind the mirrors, Figure 4 illustrates the moving angles for each mirror separately, and Figure 5 shows the base of the actuator motor in the proposed optical system.
Each mirror segment is supported by three actuators positioned 120 degrees apart and separated by h′. The mirror's side length for the PSMT is 300 mm, h is 235.2 mm, and h′ is 271.58 mm, and Δz1, Δz2, and Δz3 denote the linear displacements of the three actuators. The formula below gives the segment's tip, tilt, and piston [21]:
$$x_{\mathrm{tilt}} = \frac{2\Delta z_1 - (\Delta z_2 + \Delta z_3)}{2h}, \qquad y_{\mathrm{tilt}} = \frac{\Delta z_2 - \Delta z_3}{2h}, \qquad \mathrm{piston} = \frac{\Delta z_1 + \Delta z_2 + \Delta z_3}{3} \qquad (7)$$
Equation (7) was derived following the segmented parabolic reflector adjustment method described by Wang et al. [21]. The rotation matrix transformation between the actuator displacement vectors and mirror surface orientation was applied to compute the tip, tilt, and piston parameters for each segment.
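A minimal sketch of Equation (7) is given below; the function converts three actuator displacements into the segment's x-tilt, y-tilt, and piston, with h taken from the dimensions quoted above. It is an illustration of the formula, not the controller code.

```python
# Minimal sketch of Equation (7): convert three actuator displacements
# (spaced 120 degrees apart) into the segment's x-tilt, y-tilt, and piston.
# h is the lever-arm length quoted in the text (235.2 mm).
H_MM = 235.2

def tip_tilt_piston(dz1: float, dz2: float, dz3: float, h: float = H_MM):
    """Return (x_tilt, y_tilt, piston) for displacements dz in mm; tilts in rad."""
    x_tilt = (2.0 * dz1 - (dz2 + dz3)) / (2.0 * h)
    y_tilt = (dz2 - dz3) / (2.0 * h)
    piston = (dz1 + dz2 + dz3) / 3.0
    return x_tilt, y_tilt, piston

if __name__ == "__main__":
    # Pure piston: all three actuators move together, no tilt results.
    print(tip_tilt_piston(0.10, 0.10, 0.10))
    # Differential motion of actuators 2 and 3 produces a y-tilt only.
    print(tip_tilt_piston(0.00, 0.05, -0.05))
```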
Figure 6 shows (1) the path of the incident rays in the proposed optical system and (2) the mechanical design of the proposed adaptive optical system head. This architecture can be summarized as follows: a main hexagonal mirror with six segments, each provided with a micrometric movement to enable precise placement.
An additional monolithic mirror is included, along with a deployable front chamber that can be rolled up for launch; once unrolled, it may be used to fine-tune the secondary mirror’s location in space, making adjustments at the entrance pupil and at the intermediate and exit pupil planes. Segmented mirrors are used for correction in both scenarios. The mirror primarily corrects the line of sight in the intermediate pupil and the nonmetric defaults in the exit pupil after in-flight phase retrieval algorithm analysis. The mirrors are made of carbon and ceramic materials, which are highly stable, as summarized in Table 1.
The optical arrangement determines how the suggested structure is configured. An alt-azimuthal mount will be used in the proposed system to maintain accurate tracking and pointing at wind speeds of up to 20 m/s. The proposed frame, made of extremely rigid steel, will maintain the precise alignment of the optical components regardless of the telescope's orientation. All components of the mechanical system that are exposed to sunlight will be coated with white TiO2 paint to reduce thermal deformation. Figure 7 displays the mechanical system setup for the proposed system, which consists of two primary components: the elevation part and the azimuth part.
The locations of the hexagonal mirrors are as follows: the central segment is placed at x = 210 mm, and the second segment is adjacent to the central one, as shown in the table. The segment at x = −270 mm and z = 209 mm is positioned relative to the central segment, and the remaining segments are arranged similarly to form the hexagonal telescope. Table 2 lists the location coordinates of all six hexagonal mirrors.
The mechanical design depends on the durability of the materials used to ensure the stability of the mirrors and the absence of vibrations, which affect the formed image. The mechanical system also contains a moving system that directs the optical system in the desired direction. Figure 8 presents the implementation of the proposed optical system, and Figure 9 presents the proposed optical telescope in an optical protective fiber dome.

2.2. The Proposed Deep Learning Model

Most modern trackers use only spatial information, such as the appearance of the target, as in SwinTrack and others. However, there have been some attempts to take advantage of temporal information, as in the STARK algorithm, by inserting an updated image. The SwinTrack algorithm, which is the basic model used in our research, relies only on the template taken from the first frame as the first input and the search window as the second input; changes in the target's appearance over time are therefore not taken into account.
Unlike STARK, which fuses temporal features through end-to-end retraining, our model employs a lightweight attention fusion block for online template updates. This achieves temporal robustness without increasing inference complexity.
The basic idea of this research is to take advantage of temporal information, as in the STARK algorithm [22], to modify the SwinTrack model [23]. The image of the object generated during tracking, referred to as the dynamic template or the new template, is taken as the third input to the algorithm. Figure 10 and Figure 11 show the modification made to the SwinTrack backbone: the features of the new template are extracted by the same backbone, and the features of the primary template and the new template must then be merged and processed. We considered several options for integrating the features of the new template.
The first option, as in STARK and SwinTrack, is to concatenate the features of the three inputs into one sequence and use it as the input to the model, enabling it to capture both temporal and spatial information. This option requires changing the number of positional encoding (PE) weights, and we would therefore have to train these new weights with full-model training, involving 23 million weights for the SwinTrack-Tiny model. Using our NVIDIA GeForce 2080 Ti GPU and 300 epochs, as in the original model trained on the GOT-10k dataset [24], this process could take more than a month. Therefore, a second method that takes advantage of the weights of the original, pre-trained model is needed. The second option is to merge the features of the primary template with those of the new template.
This method preserves the dimensions of the encoder's input. The approach proposed here uses the basic building block of the transformer, the attention mechanism, which by construction preserves these dimensions. Following the second option, we apply a cross-attention block in which the features of the updated template serve as the query and the features of the primary template serve as the key and value, so the fused input is computed according to the following equations [25].
Equations (8) and (9) adapt the multi-object motion estimation relations introduced by Fang et al. [25]; while the original work addresses relative-depth estimation, we reformulated its attention weight expression to define temporal correlation between the primary and updated templates:
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}\left(\mathrm{head}_1, \ldots, \mathrm{head}_h\right) W^{O} \qquad (8)$$
$$\mathrm{head}_i = \mathrm{Attention}\!\left(Q W_i^{Q},\, K W_i^{K},\, V W_i^{V}\right) \qquad (9)$$
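A minimal PyTorch sketch of this fusion step under Equations (8) and (9) is shown below. The embedding dimension, head count, and the residual/normalization details are assumptions for illustration; only the query/key/value assignment follows the text (dynamic template as query, primary template as key and value).

```python
import torch
import torch.nn as nn

# Minimal sketch of the template-fusion step: the dynamic (updated) template
# provides the query, the primary template provides the key and value, and a
# multi-head cross-attention block merges them while keeping the token
# dimensions unchanged. Illustrative module, not the authors' implementation.
class TemplateFusion(nn.Module):
    def __init__(self, embed_dim: int = 384, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, primary_tokens: torch.Tensor,
                dynamic_tokens: torch.Tensor) -> torch.Tensor:
        """primary_tokens, dynamic_tokens: (B, N, C); returns fused (B, N, C)."""
        # Query = dynamic template, Key/Value = primary template (see text above).
        fused, _ = self.attn(query=dynamic_tokens,
                             key=primary_tokens,
                             value=primary_tokens)
        # Residual connection keeps the primary-template information dominant.
        return self.norm(primary_tokens + fused)

if __name__ == "__main__":
    fusion = TemplateFusion(embed_dim=384, num_heads=4)
    z0 = torch.randn(2, 64, 384)   # primary template tokens (toy size)
    zt = torch.randn(2, 64, 384)   # dynamic template tokens
    print(fusion(z0, zt).shape)    # torch.Size([2, 64, 384])
```

Because the fused output has the same shape as the original template features, it can be fed to the unmodified encoder, which is what allows the pre-trained weights to be reused.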
In the classification loss below, q is the IoU between the predicted bounding box and the ground-truth bounding box from the training data, and p is the output of the classification head expressing the probability that the object is present.
$$\mathrm{VFL}(p, q) = \begin{cases} -q\left(q\log(p) + (1 - q)\log(1 - p)\right), & q > 0 \\ -\alpha p^{\gamma}\log(1 - p), & q = 0 \end{cases} \qquad (10)$$
$$L_{\mathrm{cls}} = \mathrm{VFL}\!\left(p,\, \mathrm{IoU}(b, \hat{b})\right) \qquad (11)$$
To train the regression network, we use the generalized IoU (GIoU) loss according to the following equation:
$$L_{\mathrm{reg}} = \sum_{j} \mathbb{1}_{q_j > 0}\, p_j\, L_{\mathrm{GIoU}}\!\left(b_j, \hat{b}\right) \qquad (12)$$
Since the IoU lies in the range 0 to 1, negative samples representing the background are ignored during training, and more importance is given to samples with a high predicted probability by weighting the GIoU loss accordingly. As in the STARK algorithm, a total loss consisting of the weighted sum of the two previous losses is used to train the overall model, and the model weights are learned by minimizing this combined loss.
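The two losses and their weighted sum can be sketched as follows; torchvision's generalized_box_iou_loss is used for the GIoU term, and the focusing parameters alpha and gamma as well as the loss weights are illustrative assumptions, not the values used in this work.

```python
import torch
from torchvision.ops import generalized_box_iou_loss

# Minimal sketch of Equations (10)-(12): varifocal loss (VFL) for the
# classification head and an IoU-weighted GIoU loss for the regression head.
def varifocal_loss(p: torch.Tensor, q: torch.Tensor,
                   alpha: float = 0.75, gamma: float = 2.0) -> torch.Tensor:
    """p: predicted probabilities in (0, 1); q: target IoU (0 for background)."""
    eps = 1e-6
    p = p.clamp(eps, 1.0 - eps)
    pos = -q * (q * torch.log(p) + (1.0 - q) * torch.log(1.0 - p))   # q > 0 branch
    neg = -alpha * p.pow(gamma) * torch.log(1.0 - p)                  # q = 0 branch
    return torch.where(q > 0, pos, neg).mean()

def tracking_loss(p, q, pred_boxes, gt_boxes, w_cls: float = 1.0, w_reg: float = 2.0):
    """Weighted sum of classification and regression losses (boxes in xyxy format)."""
    l_cls = varifocal_loss(p, q)
    pos = q > 0                                      # background samples are ignored
    if pos.any():
        giou = generalized_box_iou_loss(pred_boxes[pos], gt_boxes[pos], reduction="none")
        l_reg = (p[pos] * giou).mean()               # weight GIoU by predicted score
    else:
        l_reg = pred_boxes.sum() * 0.0
    return w_cls * l_cls + w_reg * l_reg

if __name__ == "__main__":
    p = torch.rand(8)
    q = torch.tensor([0.0, 0.7, 0.0, 0.9, 0.0, 0.5, 0.0, 0.8])
    xy = torch.rand(8, 2) * 50
    pred = torch.cat([xy, xy + 20 + torch.rand(8, 2) * 10], dim=1)
    gt = torch.cat([xy, xy + 25], dim=1)
    print(tracking_loss(p, q, pred, gt))
```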

3. Results and Discussion

In these experiments, we trained the model on the GOT-10k training dataset for the tracking task. GOT stands for Generic Object Tracking; the dataset consists of roughly 10,000 video segments, hence the name GOT-10k. It contains more than 1.5 million labeled bounding boxes and has a total size exceeding 60 GB. The videos are divided into 563 object categories and 87 movement patterns, aiming to cover as many real-life challenges as possible. Figure 12 shows some samples from the GOT-10k dataset.
The GOT-10k dataset uses the one-shot protocol: there is no overlap between the training and test data in terms of object classes, apart from the space-debris objects considered here. Figure 13 shows the number of object and motion categories and the number of videos in each subset; the test set contains 420 videos, covering 84 object categories and 31 movement categories.
As mentioned, the basic model in our research is SwinTrack, which was used in all our tests; the chosen variant was SwinTrack-Tiny, built on the Swin Transformer-Tiny backbone. We used only the weights of the model pre-trained on the GOT-10k dataset. The model was then trained to add an attention block that combines the features of the primary template with those of the new template, as mentioned earlier. In this section, we discuss the training and test results of two experiments. During the training phase, the primary template, the new template, and the search window were randomly selected from the ground-truth annotations of the training set within one video, as in the STARK algorithm.
To generate error curves and track system performance during training, we used the Weights & Biases (wandb) library. Figure 14 shows the training sample number on the horizontal axis and the epoch number on the vertical axis, together with the values of the learning rate and training rate during the training process. Figure 15 shows the change in the error; we observe a decrease in the mean error curves as the weights of the model change during training on the dataset.
The model was tested on the validation set after each epoch during the training process. Figure 16, Figure 17 and Figure 18 show the error curves of the classification and regression networks and the total error curve. As in the mean error curves during training, the mean error of the model on the validation data decreases after each epoch.
In the second experiment, a cross-attention block with eight heads was implemented. The weights of the model trained in the first experiment were used as initial weights, and during training all weights were frozen except for the block weights. Training was performed using an NVIDIA GeForce 3060 Ti GPU, with 9 epochs, a batch size of 1, and selective sampling.
A reduction in RMS wave front error from 1.77 × 10−3 to 8.2 × 10−4 resulted in a 1.6% improvement in SR0.5, demonstrating that enhanced optical correction yields measurable tracking accuracy gains.
We present the results of the model testing for the previous two experiments on the validation and test sets. Since we do not have the ground-truth values of the test set, and the test set does not overlap with the training and validation sets in terms of object classes, model evaluation on the test set is the most important because it determines the generalizability of the model. Table 3 and Table 4 show the results on the GOT-10k test and validation sets for the baseline SwinTrack-Tiny and the two experiments described earlier. Since we added an attention block with four heads together with temporal information, we denote the model of the first experiment SwinTrack-t4 and the model of the second experiment, which includes an eight-head attention block, SwinTrack-t8.
From Table 4, comparing the algorithms in terms of speed (measured in FPS, frames per second), we observe that the attention block with template updates every 30 frames did not affect the speed for four heads, but updating every 10 frames reduced the speed by 8 FPS when using eight heads. The performance did not improve on the validation data; however, when comparing the algorithms on the test set via the GOT-10k server, the performance in both experiments improved. In the first experiment (SwinTrack-t4), AO increased by 0.8 and SR0.5 by 1.6%. The difference between the results on the test and validation sets can be explained by the overlap between the training set and the validation set.
The original model was trained for 300 epochs, while our first test model was trained for 40 epochs and the second for only 9 epochs. This explains the underfitting on the training set, which overlaps with the validation set. At the same time, the weights of the basic model were frozen and only the weights of the attention block were adjusted in the modified model, which explains the improved performance on the test set, whose content differs from the training and validation data. In the second experiment (SwinTrack-t8), the performance improved by 0.2 for AO and by 0.6 for SR0.5; this slight improvement can be explained by the fact that the model was not trained sufficiently, as it was trained for only 9 epochs due to technical problems with the NVIDIA GeForce 2080 GPU.
It is expected that the second model's performance would improve further if it were trained for longer. In both experiments, we observe a decrease in the SR0.75 value, by 1.6% for the first model and by 1.2% for the second; this means that the localization accuracy is lower than that of the basic model, but the number of frames being tracked is greater. Thus, updating the target image and combining it with the primary template in this way may make the tracker more robust but less accurate.
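For reference, the metrics discussed here can be computed from per-frame overlaps as in the short sketch below: average overlap (AO) is the mean IoU, and SR@t is the fraction of frames whose IoU exceeds the threshold t. The random IoU values are placeholders for a real tracker's per-frame overlaps.

```python
import numpy as np

# Minimal sketch of GOT-10k-style metrics: AO and success rates at thresholds.
def average_overlap(ious: np.ndarray) -> float:
    return float(np.mean(ious))

def success_rate(ious: np.ndarray, threshold: float) -> float:
    return float(np.mean(ious > threshold))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ious = rng.uniform(0.2, 1.0, size=1000)     # toy per-frame IoU values
    print(f"AO     = {average_overlap(ious):.3f}")
    print(f"SR0.5  = {success_rate(ious, 0.5):.3f}")
    print(f"SR0.75 = {success_rate(ious, 0.75):.3f}")
```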
As for the evaluation curves, Figure 19 and Figure 20 show the success and accuracy curves of the three models mentioned earlier on the validation set, where the SwinTrack-t8 model is green, SwinTrack-t4 is blue, and the original SwinTrack-Tiny is orange. The performance of the modified models did not improve on the validation data, which explains why the curves of the two modified models lie below that of the original SwinTrack-Tiny model.
Using the images taken from the proposed system, we tested the previous algorithms on several videos from the validation group. The tracking results are shown in the following images, where the color of the rectangles corresponds to a specific algorithm as follows:
  • Large objects in green are close to the observatory target as they pass through the orbit assigned to them according to the Earth’s field.
  • Small objects in red are far from the observatory target as they pass through the orbit assigned to them according to the Earth’s field.
Figure 21 and Figure 22 show examples of tracking similar targets.

4. Conclusions

The proposed system successfully fulfills the requirements for detecting low-orbit satellites using adaptive optics based on deep learning algorithms. A six-mirror telescope with a movable base was designed and implemented to adjust the focal length of the system as a whole, supported by an opto-mechanical system matched to the optical design of the proposed telescope. Modified SwinTrack-Tiny (STT) algorithms were used for image processing and tracking: the first modification adds a four-head attention block, and the second uses eight heads in the attention block; both showed improved performance when tested on the test data. The algorithms were tested on images captured with the proposed system and on several videos from the validation set. In the resulting visualizations, each rectangle color corresponds to an algorithm: large green rectangles mark objects close to the observatory target as they pass through the orbit assigned to them according to the Earth's field, while small red rectangles mark objects far from the observatory target. One of the most important applications of the proposed system is the detection of military satellites for information collection.

Author Contributions

Methodology, A.R.E.-S. and A.A. (Amir Almslmany); validation, A.A. (Abdelrhman Adel); formal analysis, A.I.S.; software, H.A.A.; results, M.M.A.; writing and project administration, M.M.A.; funding acquisition, A.R.E.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AOS  Adaptive Optical System
LOS  Low-Orbit Satellite
STT  SwinTrack-Tiny
SHEO  Super High Earth Orbit
HEO  High Earth Orbit
MEO  Mid-Earth Orbit
GPS  Global Positioning System
LEO  Low Earth Orbit
NCC  Normalized Cross-Correlation
MDNet  Multi-Domain Network
MOSSE  Minimum Output Sum of Squared Error
GOT  Generic Object Tracking
LaSOT  Large-scale Single Object Tracking

References

  1. Hauser, D.; Abdalla, S.; Ardhuin, F.; Bidlot, J.-R.; Bourassa, M.; Cotton, D.; Gommenginger, C.; Evers-King, H.; Johnsen, H.; Knaff, J.; et al. Satellite remote sensing of surface winds, waves, and currents: Where are we now? Surv. Geophys. 2023, 44, 1357–1446. [Google Scholar] [CrossRef]
  2. Shavers, M.; Semones, E.; Tomi, L.; Chen, J.; Straube, U.; Komiyama, T.; Shurshakov, V.; Li, C.; Rühm, W. Space agency-specific standards for crew dose and risk assessment of ionizing radiation exposures for the International Space Station. Z. für Med. Phys. 2024, 34, 14–30. [Google Scholar] [CrossRef] [PubMed]
  3. Pilkevych, I.A.; Bespalko, I.A.; Naumchak, L.M.; Pekariev, D.V. An analysis of the approach to the features of satellites classification determining based on modeling of linguistic variables and membership functions. In Doors; Edge Computing Workshop: Zhytomyr, Ukraine, 2024; pp. 52–59. [Google Scholar]
  4. Dalal, S.; Rescigno, F.; Cretignier, M.; John, A.A.; Majidi, F.; Malavolta, L.; Mortier, A.; Pinamonti, M.; Buchhave, L.A.; Haywood, R.D.; et al. Trio of super-Earth candidates orbiting K-dwarf HD 48948: A new habitable zone candidate. Mon. Not. R. Astron. Soc. 2024, 531, 4464–4481. [Google Scholar] [CrossRef]
  5. Biasiotti, L.; Simonetti, P.; Vladilo, G.; Ivanovski, S.; Damasso, M.; Sozzetti, A.; Monai, S. Potential climates and habitability on Gl 514 b: A super-Earth exoplanet with high eccentricity. Mon. Not. R. Astron. Soc. 2024, 530, 4300–4316. [Google Scholar] [CrossRef]
  6. Zhang, K.; Guo, H.; Jiang, D.; Han, C.; Chen, G. Preliminary Exploration of Coverage for Moon-Based/HEO Spaceborne Bistatic SAR Earth Observation in Polar Regions. Remote Sens. 2024, 16, 2086. [Google Scholar] [CrossRef]
  7. Voicu, A.M.; Bhattacharya, A.; Petrova, M. Handover Strategies for Emerging LEO, MEO, and HEO Satellite Networks. IEEE Access 2024, 12, 31523–31537. [Google Scholar] [CrossRef]
  8. Zea, L.; Warren, L.; Ruttley, T.; Mosher, T.; Kelsey, L.; Wagner, E. Orbital Reef and commercial low Earth orbit destinations—Upcoming space research opportunities. Npj Microgravity 2024, 10, 43. [Google Scholar] [CrossRef] [PubMed]
  9. van Kooten, M.A.; Jackson, K.; Dunn, J.; Chapin, E.; Steinbring, E.; Veran, J.-P.; Lardière, O.; Kerley, D.; Kumar, T.; Andersen, D.R.; et al. Adaptive optics telemetry tools for REVOLT: A deep dive into telemetry. In Adaptive Optics Systems IX; SPIE: Bellingham, WA, USA, 2024; pp. 1402–1410. [Google Scholar]
  10. Patel, D.; Diab, M.; Cheriton, R.; Taylor, J.; Rojas, L.; Vachon, M.; Xu, D.-X.; Schmid, J.H.; Cheben, P.; Janz, S.; et al. End-to-end simulations of photonic phase correctors for adaptive optics systems. Opt. Express 2024, 32, 27459–27472. [Google Scholar] [CrossRef] [PubMed]
  11. Quirós-Pacheco, F.; Bouchez, A.; Plantet, C.; Xin, B.; Molgo, J.; Schoenell, W.; Puglisi, A.; Rossi, F.; Schurter, P.; Haddad, J.P.; et al. The Giant Magellan Telescope’s high contrast adaptive optics testbed: NGAO wavefront sensing and control laboratory results. In Adaptive Optics Systems IX; SPIE: Bellingham, WA, USA, 2024; pp. 498–518. [Google Scholar]
  12. Hampson, K. Adaptive optics correctors. In Handbook of Adaptive Optics; CRC Press: Boca Raton, FL, USA, 2024; pp. 20–37. [Google Scholar]
  13. Christianto, V.; Smarandache, F. Acts Chapter 29: Art and Science and Theology in Dialogue; Infinite Study: Paris, France, 2024. [Google Scholar]
  14. Moon, B.; Poletti, M.; Roorda, A.; Tiruveedhula, P.; Liu, S.H.; Linebach, G.; Rucci, M.; Rolland, J.P. Alignment, calibration, and validation of an adaptive optics scanning laser ophthalmoscope for high-resolution human foveal imaging. Appl. Opt. 2024, 63, 730–742. [Google Scholar] [CrossRef] [PubMed]
  15. Hong, L.; Yan, S.; Zhang, R.; Li, W.; Zhou, X.; Guo, P.; Jiang, K.; Chen, Y.; Li, J.; Chen, Z.; et al. One tracker: Unifying visual object tracking with foundation models and efficient tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 19079–19091. [Google Scholar]
  16. Kazi, K.S. Computer-Aided Diagnosis in Ophthalmology: A Technical Review of Deep Learning Applications. In Transformative Approaches to Patient Literacy and Healthcare Innovation; IGI Global Scientific Publishing: Hershey, PA, USA, 2024; pp. 112–135. [Google Scholar]
  17. Zhao, G.; Zhu, J. Off-axis zoom optical systems based on mirror rotation and their design method. Opt. Laser Technol. 2024, 177, 111031. [Google Scholar] [CrossRef]
  18. Lester, A. Material Reflections: Early Modern Magicians’ Mirrors in Performance Documents. Shakespear. Bull. 2024, 42, 163–182. [Google Scholar] [CrossRef]
  19. Xi, M.; Liu, H.; Li, D.; Wang, Y. Intensity response model and measurement error compensation method for chromatic confocal probe considering the incident angle. Opt. Lasers Eng. 2024, 172, 107858. [Google Scholar] [CrossRef]
  20. Sauniere, L.; Gillard, W.; Zoubian, J. Decoding optical aberrations of low-resolution Instruments from PSFs: Machine learning and Zernike polynomials perspectives. In Space Telescopes and Instrumentation 2024: Optical, Infrared, and Millimeter Wave; SPIE: Bellingham, WA, USA, 2024; pp. 1262–1274. [Google Scholar]
  21. Wang, F.; Kang, Y.; Guo, F. Segmented parabolic adjustment of the FAST reflector utilizing spatial coordinate rotation transformation. Meas. Sci. Technol. 2024, 35, 106009. [Google Scholar] [CrossRef]
  22. Jeffries, C.; Acuña, R. Detection of streaks in astronomical images using machine learning. J. Artif. Intell. Technol. 2024, 4, 1–8. [Google Scholar] [CrossRef]
  23. Chatterjee, S.; Kudeshia, P.; Kollo, N.; Agowun, M.A.; Peethambaran, J.; Akiyama, Y. SatStreaks: Towards Supervised Learning for Delineating Satellite Streaks from Astronomical Images. In Proceedings of the Conference on Robots and Vision, Guelph, ON, Canada, 28–31 May 2024. [Google Scholar]
  24. Vitolo, P.; Fasolino, A.; Liguori, R.; Di Benedetto, L.; Rubino, A.; Licciardo, G.D. Real-time on-board satellite cloud cover detection hardware architecture using spaceborne remote sensing imagery. In Real-Time Processing of Image, Depth, and Video Information; SPIE: Bellingham, WA, USA, 2024; pp. 120–126. [Google Scholar]
  25. Fang, L.; Albadawi, M.; Dolereit, T.; Kuijper, A.; Matthias, V. Fish Motion Estimation Using ML-based Relative Depth Estimation and Multi-Object Tracking. J. WSCG 2024, 32, 51–60. [Google Scholar] [CrossRef]
Figure 1. General model of an adaptive optical system.
Figure 2. Six-segmented mirror of the proposed optical model.
Figure 3. (1) The location of the actuators behind the mirrors; (2) the shape of the deformation resulting from actuation.
Figure 4. (1) The moving angles of a single mirror in the actuator motor; (2) the 3D CAD of the locations of the piezoelectric actuators installed in the adaptive mirror base of the proposed optical system.
Figure 5. The base of the actuator motor in the proposed optical system.
Figure 6. (1) The path of the incident rays on the proposed optical system; (2) the mechanical design of the proposed adaptive optical system head.
Figure 7. (1) The opto-mechanical design of the proposed system; (2) side view of the proposed optical system including the adaptive optical mirrors.
Figure 8. Implementation of the proposed optical system.
Figure 9. The proposed optical telescope in an optical protective fiber dome.
Figure 10. A diagram of the proposed deep neural network model.
Figure 11. The suggested detection model architecture.
Figure 12. Samples from the GOT-10k dataset.
Figure 13. The number of object and movement categories in each part of the GOT-10k set.
Figure 14. (A) The epoch number as a function of the training sample number. (B) The training rate as a function of the training sample number.
Figure 15. (A) The classification network error during training. (B) The regression network error during training.
Figure 16. (A) The classification network error on the validation set. (B) The regression network error on the validation set.
Figure 17. (A) The total error during training. (B) The total error on the validation set.
Figure 18. (A) The total error during training. (B) The total error on the validation set.
Figure 19. The success curves of the three models: blue is the original model, green is the first experiment with a four-head attention block, and orange is the second experiment with eight heads.
Figure 20. The accuracy curves of the three models.
Figure 21. An example of tracking similar targets.
Figure 22. An example of tracking similar targets.
Table 1. Requirements for the six-segmented mirror of the proposed optical model.
Description | Requirement
Mass | <39 kg
Nominal Operating Temperature | 45 K
Conic | −0.9967 ± 0.0005
Hexapod motion | 6 degrees of freedom
Radius of Curvature | 156.722 mm
Absolute Error | ±1.0 mm from nominal
Fabrication Matching | ±0.150 mm from average of 18
Table 2. Location coordinates of all six hexagonal mirrors.
Mirror No. | X Position (mm) | Y Position (mm) | Distance from the Reference Point (mm)
1 | 540 | 0 | 206.550
2 | 405 | −233.820 | 62.094
3 | 270 | 467.640 | 2063.550
4 | 0 | 233.889 | 209.889
5 | −270 | 0 | 209.889
6 | −405 | 233.889 | 209.889
Table 3. Test results of the models on the GOT-10k test set.
Metric | SwinTrack-Tiny | SwinTrack-Tiny 4 | SwinTrack-Tiny 8
AO | 69.6 | 70.4 | 69.8
SR0.5 | 79.5 | 81.1 | 80.3
SR0.75 | 64.8 | 63.2 | 63.6
Table 4. Results of the models on the GOT-10k validation set.
Metric | SwinTrack-Tiny | SwinTrack-Tiny 4 | SwinTrack-Tiny 8
Update interval | - | 10 | 30
IoU threshold | - | - | 0.8
Average overlap | 83.2 | 80.7 | 81.3
SR0.5 | 93.2 | 90.8 | 91.6
SR0.75 | 81.8 | 78.1 | 80.0
FPS | 88 | 72 | 88
Params (M) | 22.7 | 23.3 | 23.3

