Abstract
This paper examines the application of Artificial Intelligence (AI) to protect satellite communication networks, focusing on the identification and prevention of cyber threats. With the rapid development of the commercial space sector, the importance of effective cyber defense has grown due to the increasing dependence of global infrastructure on satellite technologies. The study applies a structured comparative analysis of AI methods across three main satellite architectures: geostationary (GEO), low Earth orbit (LEO), and hybrid systems. The methodology is based on a guiding research question and evaluates representative AI algorithms in the context of specific threat scenarios, including jamming, spoofing, DDoS attacks, and signal interception. Real-world cases such as the KA-SAT AcidRain attack and reported Starlink jamming in Ukraine, as well as experimental demonstrations of RL-based anti-jamming and GNN/DQN routing, are used to provide evidence of practical applicability. The results highlight both the potential and limitations of AI solutions, showing measurable improvements in detection accuracy, throughput, latency reduction, and resilience under interference. Architectural approaches for integrating AI into satellite security are presented, and their effectiveness, trade-offs, and deployment feasibility are discussed.
1. Introduction
Modern satellite communication networks have become critical infrastructure that supports multiple sectors, from global communications and navigation to financial services and defense systems [1]. The rapid entry of private companies into the sector and the growing number of satellites have expanded the scope of space services, but have also increased exposure to cyber threats. In 2022, a large-scale cyberattack with the AcidRain malware against the European KA-SAT satellite network was recorded, which disabled thousands of consumer terminals and even affected the remote control of wind turbines [2]. Jamming of Starlink satellite Internet terminals was also reported during the military conflict in Ukraine, which required urgent software updates [2]. These examples demonstrate the complexity of the threats facing satellite systems and the need for more effective security measures.
Artificial intelligence (AI) is an emerging field that shows significant potential for enhancing the security and resilience of space communications. AI technologies such as machine learning, deep neural networks, and reinforcement learning can process large volumes of satellite data and detect anomalies in real time. Unlike traditional static methods (e.g., fixed thresholds and classic filters), AI-based solutions adapt to changing attack patterns and can dynamically differentiate normal behavior from malicious behavior [1,2]. In this way, a more active and flexible defense is achieved, which supports the continuous and secure operation of satellite systems even in a complex and threat-rich environment.
Nevertheless, previous research shows important limitations. Many studies assess AI-based methods under terrestrial or idealized assumptions without accounting for satellite-specific constraints such as high latency in GEO or frequent handovers in LEO [3,4]. Comparative evaluations between algorithm families are often fragmented, with limited evidence on their effectiveness in concrete attack scenarios such as jamming or spoofing [5,6]. Moreover, practical deployment challenges—including limited on-board computational resources, energy constraints, and difficulties in real-time model updates—are often underreported [7]. These limitations point to the need for a structured comparative analysis that links AI algorithms to specific satellite architectures and operational environments.
In this context, the objective of this paper is to investigate the applicability of AI-based approaches for the defense of satellite communication networks and to evaluate which of them could be practically implemented.
2. Methodology
The methodological approach of this paper is grounded in a structured comparative analysis that extends beyond a narrative review. The central research question guiding the study is the following: “What architectural and algorithmic approaches with artificial intelligence are applicable to enhance the security of satellite communication networks, and which of them can be practically deployed?” To answer this question, the analysis was structured around three key perspectives: the architectural specifics of GEO, LEO, and hybrid satellite networks; the suitability of different AI models; and the practical feasibility of these models under real-world constraints such as latency, computational resources, and energy efficiency.
The research process began with a systematic review of peer-reviewed publications and official reports related to AI-based defense mechanisms in satellite and space-ground communication systems. Priority was given to studies that provide empirical results or quantitative evaluations rather than purely conceptual frameworks.
The evidence from the selected sources was synthesized into a comparative framework that maps AI algorithms to satellite architectures and threat categories. This synthesis highlights the advantages and limitations of each method, as well as the trade-offs between detection accuracy, inference latency and deployability. In this way, the methodology ensures that the study systematically structures existing knowledge, answers the formulated research question, and delivers an evidence-based comparative assessment supported by documented case studies and experimental results.
3. Satellite Networks: Architectures and Features
Satellite networks are critical to modern global telecommunications infrastructure. The continuous increase in requirements for broadband services, mobility, IoT connectivity, and low latency requires constant technological development and evolution in the architectural approaches of satellite systems [3]. In this context, geostationary (GEO), low Earth orbit (LEO), and hybrid satellite network architectures perform specific roles, tailored to their technical characteristics and applications [4].
3.1. Geostationary (GEO) Networks
Geostationary (GEO) satellite networks are located in orbit about 35,786 km above the equator, which makes them stationary relative to the Earth’s surface. Because of this feature, GEO satellites provide stable and consistent coverage over a specific region, making them suitable for widespread communication applications such as television broadcasts and broadband communications in remote areas [5,6].
The main advantages of GEO networks include wide coverage and connection stability, but a significant disadvantage is their high round-trip latency, reaching up to 600 ms, which limits their use for applications requiring real-time interaction [6]. New developments in GEO technologies include power and resource optimization through optical links and deep learning for optimal power allocation across satellite links [7]. To improve the efficiency of the communication channel, spot beams—narrowly directed beams that reduce interference and increase spectral efficiency—are used, along with on-board processing (OBP), which allows data to be processed on board the satellites [5,6].
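To illustrate the kind of power-distribution problem that such learning-based methods address, the sketch below applies classical water-filling across a few spot-beam links. This is a simple baseline under hypothetical channel gains and power budget, not the model-free deep learning approach of [7].

```python
import numpy as np

def water_filling(gains, total_power):
    """Classical water-filling power allocation over parallel links.

    gains: channel power gains (linear, not dB) for each beam/link.
    total_power: total transmit power budget to distribute.
    Returns per-link powers maximizing sum log(1 + g_i * p_i).
    """
    inv_g = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + inv_g.max()
    for _ in range(100):                          # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.clip(mu - inv_g, 0.0, None)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.clip(lo - inv_g, 0.0, None)

# Hypothetical example: four spot beams with unequal channel gains, 10 W budget
gains = np.array([1.0, 0.5, 0.2, 0.1])
powers = water_filling(gains, 10.0)
print(powers, powers.sum())
```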
3.2. Low Earth Orbit (LEO) Networks
Satellites in low Earth orbit (LEO) are located at altitudes between 500 and 2000 km, which significantly reduces signal propagation time (latency). Typical latency values for LEO architectures range from 20 to 40 ms, making them suitable for applications requiring real-time communication such as IoT, autonomous transport systems, and multimedia services [3,8]. The architectural approaches used in LEO systems include inter-satellite links (ISLs), which enable direct communication between satellites and optimized traffic routing. Recent trends in LEO networks focus on the integration of artificial intelligence and machine learning for improved routing and resource management [3,8,9,10].
Successful implementation examples include algorithms based on graph neural networks (GNNs) and deep Q-networks (DQN), which optimize network resources and minimize route latency [10].
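As an intuition-building sketch of value-based routing—a toy tabular Q-learning example on a hypothetical six-node topology, not the GNN/DQN algorithm of [10]—the following code learns latency-minimizing next hops over inter-satellite links.

```python
import random

# Hypothetical 6-node LEO snapshot: edge weights are one-hop latencies in ms
links = {
    0: {1: 8, 2: 12},
    1: {0: 8, 3: 9, 4: 15},
    2: {0: 12, 4: 10},
    3: {1: 9, 5: 7},
    4: {1: 15, 2: 10, 5: 6},
    5: {},
}
SRC, DST = 0, 5
Q = {(n, m): 0.0 for n in links for m in links[n]}
alpha, gamma, eps = 0.5, 0.95, 0.2

for _ in range(3000):                              # training episodes
    node, steps = SRC, 0
    while node != DST and links[node] and steps < 20:
        nbrs = list(links[node])
        nxt = (random.choice(nbrs) if random.random() < eps
               else max(nbrs, key=lambda m: Q[(node, m)]))
        reward = -links[node][nxt]                 # negative latency as reward
        future = max((Q[(nxt, m)] for m in links[nxt]), default=0.0)
        Q[(node, nxt)] += alpha * (reward + gamma * future - Q[(node, nxt)])
        node, steps = nxt, steps + 1

# Greedy rollout of the learned policy
path, node = [SRC], SRC
while node != DST and len(path) < 10:
    node = max(links[node], key=lambda m: Q[(node, m)])
    path.append(node)
print("learned route:", path)
```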
3.3. Hybrid Satellite Architectures (GEO–LEO–MEO)
Hybrid satellite architectures combine different orbital layers (GEO, MEO, LEO) to achieve a balance between coverage, capacity, and latency. In such architectures, LEO satellites provide low latency and wide coverage for mobile applications, while GEO satellites provide stable, high-capacity backhaul connections [4,6]. Modern hybrid systems are managed through software-defined networking (SDN) and network function virtualization (NFV), allowing for the dynamic allocation of network resources and intelligent connection management. For example, hybrid networks use deep reinforcement learning (DRL) and multi-agent reinforcement learning (MARL) for adaptive resource allocation and real-time routing based on current load and environmental conditions [6,9,11].
4. Threats and Challenges to the Security of Satellite Communications
Satellite networks are exposed to a variety of cyber threats, similar to those of terrestrial communications, but complicated by the peculiarities of the space environment. The main targets of attacks can be classified according to the classical model of Confidentiality, Integrity, Availability (CIA)—violation of confidentiality (e.g., eavesdropping), integrity (e.g., spoofing), or availability (e.g., jamming or DoS/DDoS attacks).
A modern SCS (satellite communication system) integrates LEO mega-constellations with terrestrial networks to provide global coverage and low latency, while GNSS satellites in MEO provide navigation services. Each of these elements is a potential target of cyberattacks, from jamming of user-terminal signals to compromising ground gateways. The most common attacks include the following:
Eavesdropping: Due to the wide coverage of satellite beams, signals can be intercepted from a large area. Passive eavesdropping (reception only without interference) is difficult to detect and aims to steal confidential information. Active eavesdropping also involves protocol manipulation (e.g., key extraction through protocol weaknesses). The countermeasures here are strong encryption and authentication of traffic, but this must be balanced with limitations in the bandwidth of satellites [1].
Jamming: By transmitting a powerful radio signal on the frequency of the satellite link, an attacker can render it unusable. Broadband jamming (covering a large bandwidth) and narrowband jamming (targeting a specific channel) are distinguished. Satellite systems are particularly vulnerable to jamming because the signal arriving from space is weak—for example, a GEO downlink with an EIRP of ~50 dBW is attenuated by roughly 200 dB of free-space loss over the ~36,000 km path, so a ground transmitter of only a few watts, even hundreds of kilometers from the receiver, can deliver comparable or greater power into a low-gain receive antenna (a worked link-budget sketch is given after this list of threats). Mitigation techniques include frequency hopping, beamforming, and artificial noise to mask the signal, as well as adaptive filters at the receivers.
Spoofing: An integrity attack—the adversary emits a false signal that imitates the real one in order to mislead the receiver. An example is GPS spoofing, in which a receiver (in our case, a drone) is deceived about its location. Spoofing can take the form of simple meaconing (recording and retransmitting an authentic signal) or of generating a new signal with the same parameters but delayed or modified content. Cryptographic methods, multipath detectors, and comparison with inertial data are used for protection [1].
DoS/DDoS attacks: A denial-of-service (DoS) attack overloads the target system with a large volume of traffic, hindering its ability to function normally. An attack involving multiple devices is known as a distributed denial-of-service (DDoS) attack. In a satellite system, the attacker aims to overload resources, for example by sending a large number of requests to a satellite receiver or the ground control segment to make them inaccessible. Satellite switching and memory capacities are limited, which makes them vulnerable to overload. Satellite communications also often rely on a small number of control centers—a DDoS attack against a ground control segment or a satellite operator could temporarily paralyze the entire service. Against DoS, countermeasures such as traffic filtering and capsule networks for recognizing malicious patterns are applied [12].
Attacks on the energy subsystem: Satellite-specific attacks include those aimed at the power supply. Satellites rely on solar panels and batteries—malicious commands could drain the battery (e.g., by continuously redirecting the payload or triggering frequent resets). “Kinetic” attacks involve physical destruction—e.g., a strike by a kinetic interceptor (anti-satellite missile) or by a servicing satellite (which already crosses into the domain of space defense).
Vulnerabilities in the ground segment: The “weakest link” is often the ground stations and user terminals. Malware attacks have shown that by compromising ground devices, an adversary can bring down an entire satellite service. AI can be used here to monitor system logs and network traffic at ground stations in order to detect intrusions or unusual activity early. However, the evolving threat landscape requires a new approach [13].
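To make the jamming asymmetry described above concrete, the following link-budget sketch compares the power received from a GEO downlink with the power received from a nearby ground jammer, assuming a low-gain (near-isotropic) receive antenna; the frequency, distances, and jammer power are illustrative assumptions.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

f = 12e9                                    # Ku-band downlink frequency (assumed)
sat_eirp_dbw = 50.0                         # GEO satellite EIRP (from the text)
sat_rx_dbw = sat_eirp_dbw - fspl_db(35_786e3, f)

jam_power_w = 5.0                           # "a few watts" ground jammer (assumed)
jam_eirp_dbw = 10 * math.log10(jam_power_w)
jam_rx_dbw = jam_eirp_dbw - fspl_db(100e3, f)   # jammer 100 km from the receiver

# Antenna gains are ignored (isotropic receive antenna assumed) for simplicity
print(f"satellite signal at receiver: {sat_rx_dbw:6.1f} dBW")
print(f"jammer signal at receiver:    {jam_rx_dbw:6.1f} dBW")
print(f"jammer-to-signal ratio:       {jam_rx_dbw - sat_rx_dbw:6.1f} dB")
```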
5. AI Architectures to Secure Satellite Networks
The integration of artificial intelligence in satellite security systems requires architectural innovations both at the space segment level and at the ground segment level. The main goals are autonomy, distribution, and timeliness of protective measures. Traditionally, the security of satellites has been ensured mainly by ground control centers through offline telemetry analysis, manual updates, and reactions [14].
5.1. Decentralized IDS in Orbit and on the Ground
Modern approaches propose a distributed intrusion detection system (IDS) infrastructure in which part of the analysis is performed on board the satellites and part on the ground. For example, in the recently proposed CANSat-IDS architecture for satellites, machine learning and deep learning are combined to classify network traffic of the on-board CAN control protocol. Transient anomalies are detected by a lightweight model operating in real time on the satellite (analyzing telemetry and commands for short-lived deviations), while content analysis (heavier ML models operating at the level of packet contents) is performed in the ground segment. This achieves a balance—the onboard IDS is fast and computationally lightweight, while the ground IDS is thorough but introduces some delay [15].
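The onboard/ground split can be illustrated with a minimal two-stage pipeline (an illustrative sketch, not the CANSat-IDS implementation): a lightweight statistical check that could run on board flags suspicious telemetry windows, and only flagged windows are passed to a heavier ground-side classifier. All features, thresholds, and training data below are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stage 1 (onboard, lightweight): z-score check on a telemetry feature window
def onboard_flag(window, mean, std, z_thresh=4.0):
    """Cheap real-time check: flag windows with extreme deviations."""
    z = np.abs((window - mean) / std)
    return bool((z > z_thresh).any())

# Stage 2 (ground, heavier): supervised classifier over richer features
# Hypothetical training data: rows of [mean, std, max, packet_rate], label 0/1
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 3] > 1.0).astype(int)        # toy labeling rule
ground_ids = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# Simulated operation: only flagged windows are downlinked for deep analysis
baseline_mean, baseline_std = 0.0, 1.0
window = rng.normal(size=64)
window[10] = 8.0                                    # injected anomaly
if onboard_flag(window, baseline_mean, baseline_std):
    feats = [[window.mean(), window.std(), window.max(), 2.3]]
    print("ground verdict:", ground_ids.predict(feats))   # 1 = classified malicious
```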
5.2. Edge AI and Smart Gateways
The concept of edge computing—processing moved to the edge of the network—finds natural application in satellites as well. Edge AI means that threat detection algorithms run close to the data source, i.e., on the satellite or in the local ground station, instead of centrally in the cloud. This is critical for satellite networks, where links have relatively high latency and limited capacity. For example, consider a LEO constellation delivering Internet access—it is inefficient to send a copy of every packet to a central IDS on the ground. An AI model on the satellite itself could flag anomalous traffic or suspicious patterns (e.g., abnormal behavior of a user terminal) in real time [15]. Paired with ML, incoming and outgoing satellite traffic is monitored for known attack signatures or deviations from the network profile. An edge AI architecture reduces response time (low inference latency) and reduces redundant traffic to the center, but requires model synchronization—all edge nodes need to be periodically retrained or updated to recognize the latest threats [16].
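A minimal sketch of such edge-side traffic profiling is shown below, using an unsupervised Isolation Forest trained on synthetic “normal” traffic statistics; the features and parameters are assumptions chosen for illustration, not an operational model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-terminal features: [packets/s, mean packet size, new flows/s]
normal = np.column_stack([
    rng.normal(200, 20, 2000),
    rng.normal(800, 50, 2000),
    rng.normal(5, 1, 2000),
])
edge_model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# At the edge node, score each new observation as it arrives
burst = np.array([[4000.0, 120.0, 300.0]])      # flood-like pattern
quiet = np.array([[195.0, 790.0, 5.2]])         # looks like normal traffic
print(edge_model.predict(burst))                # -1 = anomalous
print(edge_model.predict(quiet))                # +1 = normal
```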
5.3. Federated Learning and Cooperative Networks
Because satellites generate sensitive data and often do not have constant connectivity to each other, federated learning (FL) is an attractive approach. In FL, individual nodes (satellites or stations) train local ML models on their own data and then share only the model parameters with a central aggregation coordinator. This yields a common model without exposing the raw data. In the context of security, imagine a global satellite network in which each satellite observes a different traffic pattern—through FL, they could jointly train a single, more broadly robust anomaly model. A recently proposed FLOGA-AD framework combines federated anomaly detection with a genetic algorithm that optimizes the selection of participating clients (e.g., to exclude compromised or low-quality nodes). The system uses a hybrid model—a convolutional vision transformer combined with an LSTM—to analyze traffic time series, demonstrating the power of combined techniques in a distributed environment [17]. FL alleviates privacy concerns, but it raises the risks of model poisoning and the need for a trustworthy aggregator; validating client contributions through blockchain mechanisms and statistical anomaly detection has been recommended [18].
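The parameter-sharing principle can be illustrated with a minimal federated averaging (FedAvg) round, in which the “model” is reduced to a logistic-regression-style anomaly scorer and the client data are synthetic; this sketch shows only the aggregation mechanics, not FLOGA-AD.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's local training: logistic-regression-style gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))             # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)              # gradient of log-loss
    return w

# Three "satellites", each with its own synthetic traffic features and labels
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float)   # shared underlying pattern
    clients.append((X, y))

global_w = np.zeros(4)
for _ in range(10):                                   # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)              # FedAvg aggregation
print("aggregated model weights:", global_w)
```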
5.4. Integration of AI with Traditional Defenses
AI solutions must be integrated with existing security mechanisms and protocols in order to function effectively. Their roles include “smart orchestration”—selecting optimal routes through the satellite network and bypassing compromised nodes. In adaptive cryptosystems, AI assesses the likelihood of attack in real time and strengthens encryption or authentication as threats emerge [19]. Such “moving target” defense confuses attackers by continuously shifting configurations, with the AI guiding the shifts based on past attack patterns. These approaches are already demonstrating success—e.g., experiments have shown that dynamic channel allocation controlled by an RL agent can reduce the success rate of jamming by over 90% compared to static schemes [20].
6. Case Studies of Attack Scenarios
6.1. Case Study: KA-SAT AcidRain Attack (2022)
The KA-SAT incident in February 2022, in which the AcidRain malware disabled thousands of consumer terminals and disrupted the remote monitoring of 5,800 wind turbines in Germany, illustrates the systemic risks of satellite–ground integration [2]. This attack targeted the weakest link in the ground segment and demonstrated that vulnerabilities in user equipment can paralyze an entire satellite service. AI-based anomaly detection could mitigate such threats by analyzing firmware update patterns and terminal behavior. Machine learning classifiers such as Random Forest, known for their fast retraining and low computational requirements, are particularly suitable in this scenario [14]. In experimental comparisons, Random Forest classifiers have achieved detection accuracies above 95% with false alarm rates below 5%, metrics that could significantly reduce the probability of large-scale compromise.
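The sketch below shows how such detection-accuracy and false-alarm figures are computed for a Random Forest classifier; the terminal-behavior features and labels are synthetic, so the resulting numbers only illustrate the evaluation procedure, not the reported results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)

# Synthetic terminal-behavior features: [update size, update frequency,
# management-traffic rate, firmware entropy]; label 1 = malicious update pattern
X_norm = rng.normal(loc=[1.0, 0.1, 5.0, 0.6], scale=0.2, size=(3000, 4))
X_mal = rng.normal(loc=[1.6, 0.9, 25.0, 0.95], scale=0.3, size=(300, 4))
X = np.vstack([X_norm, X_mal])
y = np.hstack([np.zeros(len(X_norm)), np.ones(len(X_mal))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("detection accuracy :", (tp + tn) / (tp + tn + fp + fn))
print("false alarm rate   :", fp / (fp + tn))
```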
6.2. Case Study: Starlink Jamming in Ukraine
During the conflict in Ukraine, deliberate jamming was reported against Starlink user terminals, forcing operators to release urgent software updates [21]. The reaction highlighted the limits of centralized human-in-the-loop defense and the need for adaptive AI-driven protection. Reinforcement learning (RL) has proven highly effective in mitigating jamming. For example, experimental studies show that dynamic channel allocation guided by RL agents reduced the success rate of jamming by over 90% compared to static spectrum allocation [20]. Such approaches are highly relevant for LEO constellations, where thousands of nodes require real-time decision-making. Automated RL mechanisms could drastically shorten the disruption window and reduce service downtime for users in conflict areas.
6.3. Experimental Validation: Reinforcement Learning Anti-Jamming
NASA’s SCaN testbed provided a practical demonstration of RL-driven anti-jamming strategies. Using Q-learning, the system adapted channel selection dynamically under sweep and Markov jammers. Results showed that the proportion of jam-free communication time increased from approximately 20% under baseline random channel selection to over 55% under RL control, representing an almost 180% improvement [20]. This constitutes one of the first in-orbit validations of AI-based countermeasures and directly proves the feasibility of RL approaches in operational satellite environments.
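The structure of such a Q-learning channel-selection loop is sketched below against a simple sweep jammer; the channel count, jammer model, and learning parameters are illustrative and do not reproduce the SCaN testbed configuration.

```python
import random

N_CH = 8                                     # number of selectable channels (assumed)
alpha, gamma, eps = 0.3, 0.9, 0.1
Q = [[0.0] * N_CH for _ in range(N_CH)]      # state = channel jammed in the last slot

def sweep_jammer(t):
    """Simple sweep jammer: blocks one channel, advancing each time step."""
    return t % N_CH

state, jam_free, T = 0, 0, 20000
for t in range(T):
    # epsilon-greedy channel choice given the last observed jammed channel
    if random.random() < eps:
        action = random.randrange(N_CH)
    else:
        action = max(range(N_CH), key=lambda a: Q[state][a])
    jammed = sweep_jammer(t)
    reward = 1.0 if action != jammed else -1.0
    jam_free += reward > 0
    next_state = jammed
    Q[state][action] += alpha * (
        reward + gamma * max(Q[next_state]) - Q[state][action]
    )
    state = next_state
print("fraction of jam-free slots:", jam_free / T)
```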
6.4. Comparative Evaluation: GEO Beamforming and LEO Routing
Artificial intelligence has also been applied to improve resource utilization and robustness at the architectural level. In GEO systems, MVDR beamforming algorithms, combined with adaptive AI control, sustained SINR values of ~12 dB under strong jamming, compared to ~7 dB achieved with classical beamforming, effectively keeping the link operational at interference levels that would otherwise disrupt communication [22]. In LEO constellations, routing algorithms integrating Graph Neural Networks and Deep Q-Networks demonstrated throughput improvements of 29.47% and end-to-end latency reductions of 39.76% compared to shortest-path algorithms such as Dijkstra [10]. These figures confirm that different orbital regimes require different AI strategies: GEO benefits from advanced signal processing and resource allocation, while LEO gains most from intelligent, topology-aware routing.
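The MVDR (minimum-variance distortionless-response) weights referenced above can be computed as in the following sketch for a small uniform linear array; the array size, signal and jammer angles, and power levels are assumptions chosen for illustration.

```python
import numpy as np

N = 8                                        # array elements (assumed half-wavelength ULA)

def steer(theta_deg):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    theta = np.deg2rad(theta_deg)
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(theta))

a_sig = steer(0.0)                           # desired user at boresight
a_jam = steer(25.0)                          # jammer direction (assumed)

# Interference-plus-noise covariance: strong jammer plus unit-power noise
jam_power, noise_power = 1000.0, 1.0
R = jam_power * np.outer(a_jam, a_jam.conj()) + noise_power * np.eye(N)

R_inv = np.linalg.inv(R)
w = R_inv @ a_sig / (a_sig.conj() @ R_inv @ a_sig)   # MVDR weights

def gain_db(direction):
    return 20 * np.log10(np.abs(w.conj() @ steer(direction)))

print("gain toward user  :", round(gain_db(0.0), 1), "dB")   # 0 dB (distortionless)
print("gain toward jammer:", round(gain_db(25.0), 1), "dB")  # deep null
```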
6.5. Broader Implications: GNSS Spoofing and Multi-Orbit Resilience
Spoofing of navigation signals has been documented in multiple regions, including the Black Sea and the Middle East, leading to measurable disruptions in aviation and maritime navigation [1,2]. For MEO navigation constellations, AI can provide spoofing detection by combining signal authentication with pattern recognition. For example, hybrid GNN+LSTM models trained on temporal Doppler shift and phase characteristics have achieved detection accuracies above 97%, making them highly relevant for resilient GNSS. Moreover, hybrid GEO–LEO architectures offer unique opportunities for cooperative AI defense: GEO satellites provide stable, high-capacity backbone connectivity, while LEO satellites supply redundancy and rapid rerouting under attack conditions.
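To illustrate the temporal-feature classification underlying such detectors, the sketch below trains a plain LSTM (not the cited GNN+LSTM hybrid) on synthetic Doppler-shift and phase sequences in which spoofed signals contain an implausible jump; all data and parameters are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic sequences of [Doppler shift, carrier-phase residual] per time step.
# Authentic signals drift smoothly; spoofed ones show an abrupt, implausible jump.
def make_batch(n=64, T=50):
    t = torch.linspace(0, 1, T)
    X, y = [], []
    for i in range(n):
        doppler = torch.sin(2 * torch.pi * t) + 0.05 * torch.randn(T)
        phase = 0.1 * torch.randn(T)
        label = i % 2
        if label:                                     # spoofed: injected jump
            doppler[T // 2:] += 1.5
        X.append(torch.stack([doppler, phase], dim=1))
        y.append(label)
    return torch.stack(X), torch.tensor(y, dtype=torch.float32)

class SpoofDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)
    def forward(self, x):
        _, (h, _) = self.lstm(x)                      # last hidden state
        return self.head(h[-1]).squeeze(-1)           # spoofing logit

model = SpoofDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(30):
    X, y = make_batch()
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

X_test, y_test = make_batch(n=32)
acc = ((model(X_test) > 0) == y_test.bool()).float().mean()
print("toy detection accuracy:", float(acc))
```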
7. Discussion
7.1. Security in Orbital Architectures
As already mentioned, different orbits (GEO, LEO, MEO, HEO, IGSO) offer specific advantages and disadvantages in terms of security, which also determines the choice of AI solutions.

In GEO systems, satellites are few in number and expensive—here the focus is on robust protection of the individual node. AI approaches can aim at increasing the satellite’s autonomy: for example, a sophisticated onboard AI IDS, as the resources may allow it (geostationary satellites often have powerful processors and plenty of power from their large panels). GEO also covers large areas—if an AI detects an attack (e.g., jamming) on a GEO satellite, it can notify many users.

LEO constellations, on the other hand, are characterized by multiplicity and dynamics—tens of nodes are always visible from a given point on Earth [21]. This allows for collective security: satellites can share threat information with each other (e.g., via inter-satellite links) and support each other. If a LEO satellite is attacked (for example, by a ground-based laser dazzling its sensors or through a hack of its software), neighboring satellites can quickly take over its traffic or isolate it from the network. AI here benefits from cooperative decision-making—for example, federated training of anomaly patterns between LEO nodes, so that the entire constellation “knows” what happened to one of them and reacts in a coordinated manner. LEO links have low latency, which means that even a centralized AI controller on the ground can manage security in near real time—in Starlink this is currently performed centrally (SpaceX releases patches and commands the network in case of incidents) [22], but it is expected to be automated with AI, given the scale of thousands of satellites.

MEO satellites (navigation) are a special case: they represent an infrastructure that must be extremely reliable (GPS signals are used by aviation, the military, and financial systems). At the same time, MEO satellites have more limited contact with the ground segment (several passes a day). This implies higher autonomy—AI on board, capable of detecting, for example, spoofing or anomalous operation of its own systems and switching to a safe mode. Navigation systems suffer mainly from spoofing and jamming, so the AI there would be trained specifically for these threats—e.g., a deep filter of the navigation signal that distinguishes authentic from false signals by subtle features (phase, Doppler).

HEO (Molniya-type) and IGSO orbits have use cases (high-latitude communications, regional services) that make them similar to a combination of GEO and LEO: the satellite periodically approaches the Earth (low perigee), when there is large communication capacity and the AI can update models or report data, and then moves far away (high apogee), when it operates more autonomously. This cyclical nature is actually conducive to an AI update loop: heavy training or parameter synchronization can be scheduled during the near passage, while inference and execution take place when the satellite moves away. For the security of HEO satellites (e.g., the Russian “Molniya” for military communications), an important factor is that they cover certain Earth regions for a long time—an adversary can adapt and specifically time attacks for when the satellite is most vulnerable (e.g., at apogee, where the signal strength to the Earth is weakest) [23].
AI can counteract this through prediction: knowing the orbital position and the incident history, the system can apply stronger protection at certain orbital positions (for example, increase transmit power or switch to a more secure mode at apogee, if that has been the preferred moment for attacks).
7.2. Comparison of AI Approaches—Advantages and Disadvantages
Not all algorithms or strategies are equivalent in a space context [24,25,26,27,28,29]. It is useful to summarize their pros and cons as shown in Table 1:
Table 1. AI approaches—advantages and disadvantages.
8. Conclusions
This paper examined the role of artificial intelligence as a means of enhancing the security of satellite systems by conducting a comparative analysis of different algorithmic approaches and their applicability to various orbital architectures.
The results of the study show that AI enhances resilience against major threats. Case studies such as the KA-SAT AcidRain incident and the Starlink jamming attacks reveal vulnerabilities in the ground and space segments while showcasing AI’s potential for anomaly detection and adaptive spectrum management. Evidence from NASA’s SCaN reinforcement learning testbed shows that RL algorithms significantly increase jam-free transmission time. GEO networks use AI-assisted beamforming for higher SINR under interference, while LEO constellations benefit from GNN and DQN routing, improving throughput and reducing latency. Additionally, AI-driven spoofing detection methods for MEO navigation systems have achieved over 97% detection accuracy, underscoring their importance for resilient GNSS services.
Several important trends are on the horizon. (i) The full integration of AI into operations—AI is expected to become a standard part of the onboard software of satellites (just as they have a navigation module today, tomorrow they will have an AI-based cyber defense module). This implies the creation of standards and certifications for AI in space, so that there is confidence that these models operate correctly and safely. (ii) Federated and transfer learning at large scale—as the number of satellites increases, FL can be applied even across systems (e.g., satellites of different operators sharing anonymized threat knowledge), which will increase collective security. (iii) Defense against adversarial AI—since adversaries will deliberately target the AI models of military satellites, operators will need meta-detectors capable of recognizing manipulated or deceptive inputs.
Future research will focus on integrating AI modules as standard components of satellite communication systems, with attention to explainability, robustness against adversarial manipulation, and standardized certification. The growing evidence from documented incidents and experimental demonstrations suggests that AI can move from theory to practice in the protection of satellite networks, establishing itself as a cornerstone of next-generation space cybersecurity.
Author Contributions
Conceptualization, methodology, validation, investigation, writing—review and editing, R.D., M.S., G.T., and S.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the scientific research project № 253CH0001-04 “Development of infrastructure and environment for aerospace education and research at TU-Sofia /INSATUS/” under a contract with the Research and Development Sector at TU-Sofia.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
The authors would like to thank the Research and Development Sector at the Technical University of Sofia for the financial support.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Khan, S.K.; Shiwakoti, N.; Diro, A.; Molla, A.; Gondal, I.; Warren, M. Space cybersecurity challenges, mitigation techniques, anticipated readiness, and future directions. Int. J. Crit. Infrastruct. Prot. 2024, 47, 100724.
- Kang, M.; Park, S.; Lee, Y. A Survey on Satellite Communication System Security. Sensors 2024, 24, 2897.
- Darwish, T.; Kurt, G.K.; Yanikomeroglu, H.; Bellemare, M.; Lamontagne, G. LEO Satellites in 5G and Beyond Networks: A Review From a Standardization Perspective. IEEE Access 2022, 10, 35040–35060.
- Zhang, L.; Wu, S.; Lv, X.; Jiao, J. A Two-Step Handover Strategy for GEO/LEO Heterogeneous Satellite Networks Based on Multi-Attribute Decision Making. Electronics 2022, 11, 795.
- Li, G.; Li, T.; Yue, X.; Hou, T.; Dai, B. High Reliable Uplink Transmission Methods in GEO–LEO Heterogeneous Satellite Network. Appl. Sci. 2023, 13, 8611.
- Lv, W.; Yang, P.; Ding, Y.; Wang, Z.; Lin, C.; Wang, Q. Energy-Efficient and QoS-Aware Computation Offloading in GEO/LEO Hybrid Satellite Networks. Remote Sens. 2023, 15, 3299.
- Kapsis, T.T.; Lyras, N.K.; Panagopoulos, A.D. Optimal Power Allocation in Optical GEO Satellite Downlinks Using Model-Free Deep Learning Algorithms. Electronics 2024, 13, 647.
- Choi, H.; Pack, S. Cooperative Downloading for LEO Satellite Networks: A DRL-Based Approach. Sensors 2022, 22, 6853.
- Xia, L.; Lin, B.; Zhao, S.; Zhao, Y. A Centralized–Distributed Joint Routing Algorithm for LEO Satellite Constellations Based on Multi-Agent Reinforcement Learning. Appl. Sci. 2025, 15, 4664.
- Shi, Y.; Wang, W.; Zhu, X.; Zhu, H. Low Earth Orbit Satellite Network Routing Algorithm Based on Graph Neural Networks and Deep Q-Network. Appl. Sci. 2024, 14, 3840.
- Tirmizi, S.B.R.; Chen, Y.; Lakshminarayana, S.; Feng, W.; Khuwaja, A.A. Hybrid Satellite–Terrestrial Networks toward 6G: Key Technologies and Open Issues. Sensors 2022, 22, 8544.
- Diro, A.; Kaisar, S.; Vasilakos, A.V.; Anwar, A.; Nasirian, A.; Olani, G. Anomaly detection for space information networks: A survey of challenges, techniques, and future directions. Comput. Secur. 2024, 139, 103705.
- Zhuo, M.; Liu, L.; Zhou, S.; Tian, Z. Survey on security issues of routing and anomaly detection for space information networks. Sci. Rep. 2021, 11, 1–18.
- Ali, M.L.; Thakur, K.; Schmeelk, S.; Debello, J.; Dragos, D. Deep Learning vs. Machine Learning for Intrusion Detection in Computer Networks: A Comparative Study. Appl. Sci. 2025, 15, 1903.
- Smith, E. Implementing Cybersecurity Solutions for Space Network Protection. Available online: https://csiac.dtic.mil/articles/implementing-cybersecurity-solutions-for-space-network-protection/ (accessed on 12 June 2025).
- Wang, Z.; Cao, J.; Di, X. Anomaly detection method for satellite networks based on genetic optimization federated learning. Expert Syst. Appl. 2025, 295, 128627.
- Driouch, O.; Bah, S.; Guennoun, Z. CANSat-IDS: An adaptive distributed Intrusion Detection System for satellites, based on combined classification of CAN traffic. Comput. Secur. 2024, 146, 104033.
- Le, H.D.; Park, M. Enhancing Multi-Class Attack Detection in Graph Neural Network through Feature Rearrangement. Electronics 2024, 13, 2404.
- Azar, A.T.; Shehab, E.; Mattar, A.M.; Hameed, I.A.; Elsaid, S.A. Deep Learning Based Hybrid Intrusion Detection Systems to Protect Satellite Networks. J. Netw. Syst. Manag. 2023, 31, 82.
- Jiang, C.; Wang, X.; Wang, J.; Chen, H.H.; Ren, Y. Security in Space Information Networks. IEEE Commun. Mag. 2015, 53, 82–88.
- Chen, Q.; Wang, Z.; Chen, X.; Wen, J.; Zhou, D.; Ji, S.; Sheng, M.; Huang, K. Space–Ground Fluid AI for 6G Edge Intelligence. Engineering 2025, 54, 14–19.
- Starlink and Other LEO Constellations Face a New Set of Security Risks. Available online: https://spectrum.ieee.org/satellite-jamming (accessed on 4 May 2025).
- Jeon, S.; Kwak, J.; Choi, J.P. Advanced Multibeam Satellite Network Security with Encryption and Beamforming Technologies. In Proceedings of the 2022 IEEE International Conference on Communications Workshops (ICC Workshops), Seoul, Republic of Korea, 16–20 May 2022; pp. 1177–1182.
- Jenkins, C.; Vugrin, E.; Manickam, I.; Krakowiak, S.; Richard, G.B.; Hazelbaker, J.; Troutman, N.; Maxwell, J. Moving Target Defense for Space Systems. In Proceedings of the Malware Technical Exchange Meeting, Albuquerque, NM, USA, 13–15 July 2021.
- Ahmad, I.; Suomalainen, J.; Porambage, P.; Gurtov, A.; Huusko, J.; Höyhtyä, M. Satellite-Terrestrial Integrated Networks: Architecture, Security Challenges, and Solutions. IEEE Netw. 2019, 33, 22–28.
- Luo, P.; Wang, B.; Tian, J.; Liu, C.; Yang, Y. Adversarial Attacks against Deep-Learning-Based Automatic Dependent Surveillance-Broadcast Unsupervised Anomaly Detection Models in the Context of Air Traffic Management. Sensors 2024, 24, 3584.
- Falco, G. Cybersecurity Principles for Space Systems. J. Aerosp. Inf. Syst. 2019, 16, 61–70.
- Maple, C.; Epiphaniou, G.; Hathal, W.; Atmaca, U.I.; Sheik, A.T.; Cruickshank, H.; Falco, G. The Impact of Message Encryption on Teleoperation for Space Applications. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022; pp. 1–10.
- Zhan, Y.; Zeng, G.; Pan, X. Networked TT&C for mega satellite constellations: A security perspective. China Commun. 2022, 19, 58–76.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.