Search Results (927)

Search Parameters:
Keywords = mission management

18 pages, 3548 KiB  
Article
A Fault Diagnosis Framework for Waterjet Propulsion Pump Based on Supervised Autoencoder and Large Language Model
by Zhihao Liu, Haisong Xiao, Tong Zhang and Gangqiang Li
Machines 2025, 13(8), 698; https://doi.org/10.3390/machines13080698 - 7 Aug 2025
Abstract
The ship waterjet propulsion system is a crucial power unit for high-performance vessels, and the operational state of its core component, the waterjet pump, is directly related to navigation safety and mission reliability. To enhance the intelligence and accuracy of pump fault diagnosis, this paper proposes a novel diagnostic framework that integrates a supervised autoencoder (SAE) with a large language model (LLM). This framework first employs an SAE to perform task-oriented feature learning on raw vibration signals collected from the pump’s guide vane casing. By jointly optimizing reconstruction and classification losses, the SAE extracts deep features that both represent the original signal information and exhibit high discriminability for different fault classes. Subsequently, the extracted feature vectors are converted into text sequences and fed into an LLM. Leveraging the powerful sequential information processing and generalization capabilities of the LLM, end-to-end fault classification is achieved through parameter-efficient fine-tuning. This approach aims to avoid the traditional dependence on manually extracted time-domain and frequency-domain features, instead guiding the feature extraction process via supervised learning to make it more task-specific. To validate the effectiveness of the proposed method, we compare it with a baseline approach that uses manually extracted features. In two experimental scenarios (direct diagnosis with full data, and transfer diagnosis under limited-data, cross-condition settings), the proposed method significantly outperforms the baseline in diagnostic accuracy. It demonstrates excellent performance in automated feature extraction, diagnostic precision, and small-sample data adaptability, offering new insights for the application of large-model techniques in critical equipment health management. Full article
(This article belongs to the Special Issue Fault Diagnosis and Fault Tolerant Control in Mechanical System)
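The SAE training objective described in the abstract above (jointly optimizing reconstruction and classification losses) can be sketched as follows; the array shapes, the loss-weighting hyperparameter, and the function name are illustrative assumptions, not details from the paper:

```python
import numpy as np

def sae_joint_loss(x, x_recon, logits, labels, weight=0.5):
    """Supervised-autoencoder objective: reconstruction MSE plus a
    weighted softmax cross-entropy from the classifier head.
    'weight' is an illustrative trade-off hyperparameter."""
    recon = np.mean((x - x_recon) ** 2)                 # reconstruction loss
    z = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(len(labels)), labels])
    return recon + weight * ce
```

In the paper's pipeline, the features learned under such a joint loss are then serialized to text and passed to the LLM stage.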

24 pages, 2345 KiB  
Article
Towards Intelligent 5G Infrastructures: Performance Evaluation of a Novel SDN-Enabled VANET Framework
by Abiola Ifaloye, Haifa Takruri and Rabab Al-Zaidi
Network 2025, 5(3), 28; https://doi.org/10.3390/network5030028 - 5 Aug 2025
Abstract
Critical Internet of Things (IoT) data in Fifth Generation Vehicular Ad Hoc Networks (5G VANETs) demands Ultra-Reliable Low-Latency Communication (URLLC) to support mission-critical vehicular applications such as autonomous driving and collision avoidance. Achieving the stringent Quality of Service (QoS) requirements for these applications remains a significant challenge. This paper proposes a novel framework integrating Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV) as embedded functionalities in connected vehicles. A lightweight SDN Controller model, implemented via vehicle on-board computing resources, optimised QoS for communications between connected vehicles and the Next-Generation Node B (gNB), achieving a consistent packet delivery rate of 100%, compared to 81–96% for existing solutions leveraging SDN. Furthermore, a Software-Defined Wide-Area Network (SD-WAN) model deployed at the gNB enabled the efficient management of data, network, identity, and server access. Performance evaluations indicate that SDN and NFV are reliable and scalable technologies for virtualised and distributed 5G VANET infrastructures. Our SDN-based in-vehicle traffic classification model for dynamic resource allocation achieved 100% accuracy, outperforming existing Artificial Intelligence (AI)-based methods with 88–99% accuracy. In addition, a significant increase of 187% in flow rates over time highlights the framework’s decreasing latency, adaptability, and scalability in supporting URLLC class guarantees for critical vehicular services. Full article
22 pages, 3217 KiB  
Article
A Deep Reinforcement Learning Approach for Energy Management in Low Earth Orbit Satellite Electrical Power Systems
by Silvio Baccari, Elisa Mostacciuolo, Massimo Tipaldi and Valerio Mariani
Electronics 2025, 14(15), 3110; https://doi.org/10.3390/electronics14153110 - 5 Aug 2025
Viewed by 77
Abstract
Effective energy management in Low Earth Orbit satellites is critical, as inefficient energy management can significantly affect mission objectives. The dynamic and harsh space environment further complicates the development of effective energy management strategies. To address these challenges, we propose a Deep Reinforcement Learning approach using Deep-Q Network to develop an adaptive energy management framework for Low Earth Orbit satellites. Compared to traditional techniques, the proposed solution autonomously learns from environmental interaction, offering robustness to uncertainty and online adaptability. It adjusts to changing conditions without manual retraining, making it well-suited for handling modeling uncertainties and non-stationary dynamics typical of space operations. Training is conducted using a realistic satellite electric power system model with accurate component parameters and single-orbit power profiles derived from real space missions. Numerical simulations validate the controller performance across diverse scenarios, including multi-orbit settings, demonstrating superior adaptability and efficiency compared to conventional Maximum Power Point Tracking methods. Full article
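The value-update rule that a Deep Q-Network approximates with a neural network can be illustrated in tabular form; the state/action indices and hyperparameters below are illustrative, not taken from the paper's satellite power-system model:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning step: move Q[s, a] toward the bootstrapped
    target r + gamma * max_a' Q[s', a']. A DQN replaces the table Q
    with a neural network trained against the same target."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

def epsilon_greedy(Q, s, epsilon, rng):
    """Exploration policy typically used while training."""
    if rng.random() < epsilon:
        return int(rng.integers(len(Q[s])))   # explore
    return int(np.argmax(Q[s]))               # exploit
```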

20 pages, 413 KiB  
Article
Spectral Graph Compression in Deploying Recommender Algorithms on Quantum Simulators
by Chenxi Liu, W. Bernard Lee and Anthony G. Constantinides
Computers 2025, 14(8), 310; https://doi.org/10.3390/computers14080310 - 1 Aug 2025
Viewed by 196
Abstract
This follow-up scientific case study builds on prior research to explore the computational challenges of applying quantum algorithms to financial asset management, focusing specifically on solving the graph-cut problem for investment recommendation. Unlike our prior study, which focused on idealized QAOA performance, this work introduces a graph compression pipeline that enables QAOA deployment under real quantum hardware constraints. This study investigates quantum-accelerated spectral graph compression for financial asset recommendations, addressing scalability and regulatory constraints in portfolio management. We propose a hybrid framework combining the Quantum Approximate Optimization Algorithm (QAOA) with spectral graph theory to solve the Max-Cut problem for investor clustering. Our methodology leverages quantum simulators (cuQuantum and Cirq-GPU) to evaluate performance against classical brute-force enumeration, with graph compression techniques enabling deployment on resource-constrained quantum hardware. The results underscore that efficient graph compression is crucial for successful implementation. The framework bridges theoretical quantum advantage with practical financial use cases, though hardware limitations (qubit counts, coherence times) necessitate hybrid quantum-classical implementations. These findings advance the deployment of quantum algorithms in mission-critical financial systems, particularly for high-dimensional investor profiling under regulatory constraints. Full article
(This article belongs to the Section AI-Driven Innovations)
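The classical brute-force Max-Cut enumeration that the study above uses as a baseline for QAOA can be sketched as follows; the toy weight matrix in the usage example is illustrative:

```python
from itertools import product

def max_cut_brute_force(W):
    """Exhaustive Max-Cut: try every bipartition of the n nodes and
    return the best cut weight and partition. Exponential in n, which
    is why compressing the graph matters before deployment on
    qubit-limited quantum hardware."""
    n = len(W)
    best_val, best_cut = float("-inf"), None
    for bits in product([0, 1], repeat=n):
        val = sum(W[i][j]
                  for i in range(n) for j in range(i + 1, n)
                  if bits[i] != bits[j])
        if val > best_val:
            best_val, best_cut = val, bits
    return best_val, best_cut

# e.g. a unit-weight triangle: any 2-vs-1 split cuts two edges
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```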

59 pages, 2417 KiB  
Review
A Critical Review on the Battery System Reliability of Drone Systems
by Tianren Zhao, Yanhui Zhang, Minghao Wang, Wei Feng, Shengxian Cao and Gong Wang
Drones 2025, 9(8), 539; https://doi.org/10.3390/drones9080539 - 31 Jul 2025
Viewed by 459
Abstract
The reliability of unmanned aerial vehicle (UAV) energy storage battery systems is critical for ensuring their safe operation and efficient mission execution, and has the potential to significantly advance applications in logistics, monitoring, and emergency response. This paper reviews theoretical and technical advancements in UAV battery reliability, covering definitions and metrics, modeling approaches, state estimation, fault diagnosis, and battery management system (BMS) technologies. Based on international standards, reliability encompasses performance stability, environmental adaptability, and safety redundancy, with metrics such as the capacity retention rate, mean time between failures (MTBF), and thermal runaway warning time. Modeling methods for reliability include mathematical, data-driven, and hybrid models, which are evaluated for accuracy and efficiency under dynamic conditions. State estimation focuses on five key battery parameters and compares neural network, regression, and optimization algorithms in complex flight scenarios. Fault diagnosis involves feature extraction, time-series modeling, and probabilistic inference, with multimodal fusion strategies proposed for faults such as overcharge and thermal runaway. BMS technologies include state monitoring, protection, and optimization; balancing strategies and the potential of intelligent algorithms are also explored. Challenges in this field include non-unified standards, limited model generalization, and complexity in diagnosing concurrent faults. Future research should prioritize multi-physics-coupled modeling, AI-driven predictive techniques, and cybersecurity to enhance the reliability and intelligence of battery systems in order to support the sustainable development of unmanned systems. Full article

18 pages, 2894 KiB  
Article
Technology Roadmap Methodology and Tool Upgrades to Support Strategic Decision in Space Exploration
by Giuseppe Narducci, Roberta Fusaro and Nicole Viola
Aerospace 2025, 12(8), 682; https://doi.org/10.3390/aerospace12080682 - 30 Jul 2025
Viewed by 134
Abstract
Technological roadmaps are essential tools for managing and planning complex projects, especially in the rapidly evolving field of space exploration. Defined as dynamic schedules, they support strategic and long-term planning while coordinating current and future objectives with particular technology solutions. Currently, the available methodologies are mostly built on experts’ opinions, and in just a few cases have methodologies and tools been developed to support decision makers with a rational approach. In any case, all the available approaches are meant to draw “ideal” maturation plans. It is therefore deemed essential to develop and integrate new algorithms able to provide decision guidelines for “non-nominal” scenarios. In this context, Politecnico di Torino, in collaboration with the European Space Agency (ESA) and Thales Alenia Space–Italia, developed the Technology Roadmapping Strategy (TRIS), a multi-step process designed to create robust and data-driven roadmaps. However, one of the main concerns with its initial implementation was that TRIS did not account for time and budget estimates specific to the space exploration environment, nor was it capable of generating alternative development paths under constrained conditions. This paper discloses two significant updates to the TRIS methodology: (1) improved time and budget estimation to better reflect the specific challenges of space exploration scenarios, and (2) the capability of generating alternative roadmaps, i.e., alternative technological maturation paths in resource-constrained scenarios, balancing financial and temporal limitations. The application of the developed routines to available case studies confirms the tool’s ability to provide consistent planning outputs across multiple scenarios without exceeding 20% deviation from expert-based judgements available as reference.
The results demonstrate the potential of the enhanced methodology in supporting strategic decision making in early-phase mission planning, ensuring adaptability to changing conditions, optimized use of time and financial resources, as well as guaranteeing an improved flexibility of the tool. By integrating data-driven prioritization, uncertainty modeling, and resource-constrained planning, TRIS equips mission planners with reliable tools to navigate the complexities of space exploration projects. This methodology ensures that roadmaps remain adaptable to changing conditions and optimized for real-world challenges, supporting the sustainable advancement of space exploration initiatives. Full article
(This article belongs to the Section Astronautics & Space Science)

23 pages, 794 KiB  
Article
Assessing Safety Professional Job Descriptions Using Integrated Multi-Criteria Analysis
by Mohamed Zytoon and Mohammed Alamoudi
Safety 2025, 11(3), 72; https://doi.org/10.3390/safety11030072 - 29 Jul 2025
Viewed by 259
Abstract
Introduction: Poorly designed safety job descriptions may have a negative impact on occupational safety and health (OSH) performance. Firstly, they limit the chances of hiring highly qualified safety professionals who are vital to the success of OSH management systems in organizations. Secondly, the relationship between the presence of qualified safety professionals and the safety culture (and performance) in an organization is reciprocal. Thirdly, the low quality of job descriptions limits exploring the proper competencies needed by safety professionals before they are hired. The safety professional is thus uncertain of what level of education or training and which skills they should attain. Objectives: The main goal of the study is to integrate the analytic hierarchy process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with importance–performance analysis (IPA) to evaluate job descriptions in multiple sectors. Results: The results of the study indicate that it is vital to clearly define job levels, the overall mission, key responsibilities, time-consuming tasks, required education/certifications, and necessary personal abilities in safety job descriptions. This clarity enhances recruitment, fairness, performance management, and succession planning. The organization can then attract and retain top talent, improve performance, foster a strong safety culture, create realistic job expectations, increase employee satisfaction and productivity, and ensure that competent individuals are hired, ultimately leading to a safer and more productive workplace. Conclusion: The outcomes of this study provide a robust framework that can and should be used as a guideline to professionalize job description development and enhance talent acquisition strategies. Full article
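The TOPSIS ranking step named in the objectives above can be sketched as follows; this minimal version assumes benefit-type criteria only, and the criterion weights (which the study's framework would derive via AHP) are illustrative inputs:

```python
import numpy as np

def topsis_scores(matrix, weights):
    """Minimal TOPSIS sketch: vector-normalize the decision matrix,
    apply criterion weights, then score each alternative (row) by
    its relative closeness to the ideal solution. Assumes all
    criteria are benefit-type (larger is better)."""
    M = np.asarray(matrix, dtype=float)
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)
    best, worst = V.max(axis=0), V.min(axis=0)      # ideal / anti-ideal
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)             # 1.0 = ideal alternative
```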

23 pages, 1667 KiB  
Review
Review of Advances in Multiple-Resolution Modeling for Distributed Simulation
by Luis Rabelo, Mario Marin, Jaeho Kim and Gene Lee
Information 2025, 16(8), 635; https://doi.org/10.3390/info16080635 - 25 Jul 2025
Viewed by 218
Abstract
Multiple-resolution modeling (MRM) has emerged as a foundational paradigm in modern simulation, enabling the integration of models with varying levels of granularity to address complex and evolving operational demands. By supporting seamless transitions between high-resolution and low-resolution representations, MRM facilitates scalability and interoperability, particularly within distributed simulation environments such as military command and control systems. This paper provides a structured review and comparative analysis of prominent MRM methodologies, including multi-resolution entities (MRE), agent-based modeling (from a federation viewpoint), hybrid frameworks, and the novel MR mode, synchronizing resolution transitions with time advancement and interaction management. Each approach is evaluated across critical dimensions such as consistency, computational efficiency, flexibility, and integration with legacy systems. Emphasis is placed on the applicability of MRM in distributed military simulations, where it enables dynamic interplay between strategic-level planning and tactical-level execution, supporting real-time decision-making, mission rehearsal, and scenario-based training. The paper also explores emerging trends involving artificial intelligence (AI) and large language models (LLMs) as enablers for adaptive resolution management and automated model interoperability. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Systems")

30 pages, 7472 KiB  
Article
Two Decades of Groundwater Variability in Peru Using Satellite Gravimetry Data
by Edgard Gonzales, Victor Alvarez and Kenny Gonzales
Appl. Sci. 2025, 15(14), 8071; https://doi.org/10.3390/app15148071 - 20 Jul 2025
Viewed by 525
Abstract
Groundwater is a critical yet understudied resource in Peru, where surface water has traditionally dominated national assessments. This study provides the first country-scale analysis of groundwater storage (GWS) variability in Peru from 2003 to 2023 using satellite gravimetry data from the Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) missions. We used the GRACE Data Assimilation-Data Mass Modeling (GRACE-DA-DM GLV3.0) dataset at 0.25° resolution to estimate annual GWS trends and evaluated the influence of El Niño–Southern Oscillation (ENSO) events and anthropogenic extraction, supported by in situ well data from six major aquifers. Results show a sustained GWS decline of 30–40% in coastal and Andean regions, especially in Lima, Ica, Arequipa, and Tacna, while the Amazon basin remained stable. A strong correlation (r = 0.95) between GRACE data and well records validates the findings. Annual precipitation analysis from 2003 to 2023, disaggregated by climatic zone, revealed nearly stable trends. Coastal El Niño events (2017 and 2023) triggered episodic recharge in the northern and central coastal regions, yet these were insufficient to reverse the sustained groundwater depletion. This research provides significant contributions to understanding the spatiotemporal dynamics of groundwater in Peru through the use of satellite gravimetry data with unprecedented spatial resolution. The findings reveal a sustained decline in GWS across key regions and underscore the urgent need to implement integrated water management strategies—such as artificial recharge, optimized irrigation, and satellite-based early warning systems—aimed at preserving the sustainability of the country’s groundwater resources. Full article
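An annual trend of the kind summarized above reduces to a least-squares line fit over the anomaly time series; the declining series in the usage example is illustrative, not the study's data:

```python
import numpy as np

def annual_trend(years, gws_anomaly):
    """Slope of a least-squares linear fit to a groundwater-storage
    anomaly series (e.g. cm of equivalent water height per year)."""
    slope, _intercept = np.polyfit(years, gws_anomaly, 1)
    return slope

# illustrative: a steady decline of 2 units per year
years = np.arange(2003, 2011)
series = 10.0 - 2.0 * (years - 2003)
```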

39 pages, 1775 KiB  
Article
A Survey on UAV Control with Multi-Agent Reinforcement Learning
by Chijioke C. Ekechi, Tarek Elfouly, Ali Alouani and Tamer Khattab
Drones 2025, 9(7), 484; https://doi.org/10.3390/drones9070484 - 9 Jul 2025
Viewed by 1486
Abstract
Unmanned Aerial Vehicles (UAVs) have become increasingly prevalent in both governmental and civilian applications, offering significant reductions in operational costs by minimizing human involvement. There is a growing demand for autonomous, scalable, and intelligent coordination strategies in complex aerial missions involving multiple Unmanned Aerial Vehicles (UAVs). Traditional control techniques often fall short in dynamic, uncertain, or large-scale environments where decentralized decision-making and inter-agent cooperation are crucial. A potentially effective technique used for UAV fleet operation is Multi-Agent Reinforcement Learning (MARL). MARL offers a powerful framework for addressing these challenges by enabling UAVs to learn optimal behaviors through interaction with the environment and each other. Despite significant progress, the field remains fragmented, with a wide variety of algorithms, architectures, and evaluation metrics spread across domains. This survey aims to systematically review and categorize state-of-the-art MARL approaches applied to UAV control, identify prevailing trends and research gaps, and provide a structured foundation for future advancements in cooperative aerial robotics. The advantages and limitations of these techniques are discussed along with suggestions for further research to improve the effectiveness of MARL application to UAV fleet management. Full article

17 pages, 2514 KiB  
Article
Forecasting Transient Fuel Consumption Spikes in Ships: A Hybrid DGM-SVR Approach
by Junhao Chen and Yan Peng
Eng 2025, 6(7), 151; https://doi.org/10.3390/eng6070151 - 3 Jul 2025
Viewed by 262
Abstract
Accurate prediction of ship fuel consumption is essential for improving energy efficiency, optimizing mission planning, and ensuring operational integrity at sea. However, during complex tasks such as high-speed maneuvers, fuel consumption exhibits complex dynamics characterized by the coexistence of baseline drift and transient peaks that conventional models often fail to capture accurately, particularly the abrupt peaks. In this study, a hybrid prediction model, DGM-SVR, is presented, combining a rolling dynamic grey model (DGM (1,1)) with support vector regression (SVR). The DGM (1,1) adapts to the dynamic fuel consumption baseline and trends via a rolling window mechanism, while the SVR learns and predicts the residual sequence generated by the DGM, specifically addressing the high-amplitude fuel spikes triggered by maneuvers. Validated on a simulated dataset reflecting typical fuel spike characteristics during high-speed maneuvers, the DGM-SVR model demonstrated superior overall prediction accuracy (MAPE and RMSE) compared to standalone DGM (1,1), moving average (MA), and SVR models. Notably, DGM-SVR reduced the test set’s MAPE and RMSE by approximately 21% and 34%, respectively, relative to the next-best DGM model, and significantly improved the predictive accuracy, magnitude, and responsiveness in predicting fuel consumption spikes. The findings indicate that the DGM-SVR hybrid strategy effectively fuses DGM’s trend-fitting strength with SVR’s proficiency in capturing spikes from the residual sequence, offering a more reliable and precise method for dynamic ship fuel consumption forecasting, with considerable potential for ship energy efficiency management and intelligent operational support. This study lays a foundation for future validation on real-world operational data. Full article
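The grey-model component of the hybrid described above can be sketched as a classic GM(1,1) forecast; the rolling-window refit and the SVR residual stage are omitted, and the geometric sample series in the test is illustrative:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey forecast: accumulate the series, fit the
    grey differential equation dx1/dt + a*x1 = b by least squares on
    the mean sequence, then difference the fitted cumulative curve.
    The DGM-SVR hybrid refits this on a rolling window and models the
    residuals with SVR (both omitted in this sketch)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[-steps:]                     # forecast horizon
```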

30 pages, 1426 KiB  
Article
Graduate Teaching Assistants (GTAs): Roles, Perspectives, and Prioritizing GTA Workforce Development Pathways
by Claire L. McLeod, Catherine B. Almquist, Madeline P. Ess, Jing Zhang, Hannah Schultz, Thao Nguyen, Khue Tran and Michael Hughes
Educ. Sci. 2025, 15(7), 838; https://doi.org/10.3390/educsci15070838 - 2 Jul 2025
Viewed by 666
Abstract
Graduate Teaching Assistants (GTAs) play a pivotal role in supporting and advancing the educational mission of universities globally. They are fundamental to a university’s instructional workforce and their roles are critical to the undergraduate student experience. This study examines the experiences and perceptions of GTAs (n = 74) at an R2 institution in the Midwest, U.S. Survey results reveal that the majority of surveyed GTAs have been at the institution for at least one year, teach in face-to-face formats with classes typically ranging from 12 to 30, and allocate 11–20 h/week to their instructional duties, although 30% of respondents report >20 h/week. Survey respondents reported a need for more teaching-focused onboarding, discipline-specific training, and more opportunities for feedback on their teaching practices, while almost 50% reported never engaging with discipline-based education research (DBER) literature. Although departmental and institutional training programs were acknowledged, so too was the perception of their lack of accessibility or relevance. Potential strategies for supporting GTAs, particularly early in their careers, include shadowing opportunities, sustained formal classroom management, and pedagogical training that includes an introduction to (and discussion of) the DBER literature, and a reduced teaching load in the first semester. Universities should prioritize and design GTA professional development using a cognitive apprenticeship framework. This would invest in the undergraduate student experience and directly support an institution’s educational mission. It is also highly effective in preparing highly skilled graduates to enter an increasingly connected global workforce and could positively contribute to an engaged alumni base. Full article

20 pages, 23317 KiB  
Article
Land Use and Land Cover (LULC) Mapping Accuracy Using Single-Date Sentinel-2 MSI Imagery with Random Forest and Classification and Regression Tree Classifiers
by Sercan Gülci, Michael Wing and Abdullah Emin Akay
Geomatics 2025, 5(3), 29; https://doi.org/10.3390/geomatics5030029 - 1 Jul 2025
Viewed by 607
Abstract
The use of Google Earth Engine (GEE), a cloud-based computing platform, in spatio-temporal evaluation studies has increased rapidly in natural sciences such as forestry. In this study, Sentinel-2 satellite imagery and Shuttle Radar Topography Mission (SRTM) elevation data and image classification algorithms based on two machine learning techniques were examined. Random Forest (RF) and Classification and Regression Trees (CART) were used to classify land use and land cover (LULC) in western Oregon (USA). To classify the LULC from the spectral bands of satellite images, a composition consisting of vegetation difference indices NDVI, NDWI, EVI, and BSI, and a digital elevation model (DEM) were used. The study area was selected due to a diversity of land cover types including research forest, botanical gardens, recreation area, and agricultural lands covered with diverse plant species. Five land classes (forest, agriculture, soil, water, and settlement) were delineated for LULC classification testing. Different spatial points (totaling 75, 150, 300, and 2500) were used as training and test data. The most successful model performance was RF, with an accuracy of 98% and a kappa value of 0.97, while the accuracy and kappa values for CART were 95% and 0.94, respectively. The accuracy of the generated LULC maps was evaluated using 500 independent reference points, in addition to the training and testing datasets. Based on this assessment, the RF classifier that included elevation data achieved an overall accuracy of 92% and a kappa coefficient of 0.90. The combination of vegetation difference indices with elevation data was successful in determining the areas where clear-cutting occurred in the forest. Our results present a promising technique for the detection of forests and forest openings, which was helpful in identifying clear-cut sites. 
In addition, the GEE platform and the RF classifier can help identify and map storm damage, wind damage, insect defoliation, fire, and management activities in forest areas. Full article
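For readers unfamiliar with the indices named in the abstract, all four can be computed directly from band reflectances. The sketch below is plain Python rather than the authors' GEE script, and assumes the standard Sentinel-2 band mapping (B2 = blue, B3 = green, B4 = red, B8 = NIR, B11 = SWIR); the formulas themselves are the commonly used definitions of each index.

```python
# Spectral indices used in the study, computed from Sentinel-2 reflectances.
# Illustrative sketch only -- not the authors' Earth Engine code.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters formulation)."""
    return (green - nir) / (green + nir)

def bsi(blue, red, nir, swir):
    """Bare Soil Index."""
    return ((swir + red) - (nir + blue)) / ((swir + red) + (nir + blue))

def evi(blue, red, nir):
    """Enhanced Vegetation Index, standard coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# A dense forest pixel: high NIR, low red reflectance -> NDVI near +1.
print(round(ndvi(nir=0.60, red=0.05), 3))
```

In the GEE API these are typically computed with `ee.Image.normalizedDifference` (for NDVI/NDWI) or `ee.Image.expression` (for EVI/BSI) and then stacked with the DEM band as inputs to the RF or CART classifier.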
30 pages, 5139 KiB  
Article
Design to Deployment: Flight Schedule-Based Analysis of Hybrid Electric Aircraft Variants in U.S. Regional Carrier Operations
by Emma Cassidy, Paul R. Mokotoff, Yilin Deng, Michael Ikeda, Kathryn Kirsch, Max Z. Li and Gokcin Cinar
Aerospace 2025, 12(7), 598; https://doi.org/10.3390/aerospace12070598 - 30 Jun 2025
Abstract
This study evaluates the feasibility and benefits of introducing battery-powered hybrid electric aircraft (HEA) into regional airline operations. Using 2019 U.S. domestic flight data, the ERJ175LR is selected as a representative aircraft, and several HEA variants are designed to match its mission profile under different battery technologies and power management strategies. These configurations are then tested across over 800 actual daily flight sequences flown by a regional airline. The results show that well-designed HEA can achieve 3–7% fuel savings compared to conventional aircraft, with several variants able to complete all scheduled missions without disrupting turnaround times. These findings suggest that HEA can be integrated into today’s airline operations, particularly for short-haul routes, without the need for major infrastructure or scheduling changes, and highlight opportunities for future co-optimization of aircraft design and operations. Full article
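The "power management strategies" mentioned in the abstract can take many forms; one minimal example, sketched below with hypothetical numbers rather than the paper's aircraft models, is a constant power split applied per mission segment, where a fixed fraction of the required shaft power is drawn from the battery.

```python
# Minimal sketch of a constant power-split strategy for a parallel hybrid.
# Numbers are hypothetical -- not the ERJ175LR or any of the paper's variants.

def segment_energy(p_required_kw, split, duration_hr):
    """Split required shaft power between turbine and battery for one
    mission segment; returns (turbine_kWh, battery_kWh)."""
    p_batt = split * p_required_kw       # electric share of shaft power
    p_turb = p_required_kw - p_batt      # remainder supplied by the gas turbine
    return p_turb * duration_hr, p_batt * duration_hr

# Toy climb + cruise profile with a 10% electric share:
turb, batt = 0.0, 0.0
for p_kw, t_hr in [(4000, 0.25), (2500, 1.0)]:  # (power, duration) per segment
    e_t, e_b = segment_energy(p_kw, split=0.10, duration_hr=t_hr)
    turb += e_t
    batt += e_b

baseline = 4000 * 0.25 + 2500 * 1.0             # all-turbine energy, kWh
saving_pct = 100.0 * (baseline - turb) / baseline
print(round(saving_pct, 1))                     # equals the split here, by construction
```

In practice the fuel saving (3-7% in the study) is smaller than the electric energy share, because the battery adds mass and turbine efficiency varies with load, which is why the paper sizes each HEA variant against the full mission profile rather than a single design point.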

24 pages, 1089 KiB  
Article
Dual-Chain-Based Dynamic Authentication and Handover Mechanism for Air Command Aircraft in Multi-UAV Clusters
by Jing Ma, Yuanbo Chen, Yanfang Fu, Zhiqiang Du, Xiaoge Yan and Guochuang Yan
Mathematics 2025, 13(13), 2130; https://doi.org/10.3390/math13132130 - 29 Jun 2025
Abstract
Cooperative multi-UAV clusters have been widely applied in complex mission scenarios due to their flexible task allocation and efficient real-time coordination capabilities. The Air Command Aircraft (ACA), as the core node within the UAV cluster, is responsible for coordinating and managing various tasks within the cluster. When the ACA undergoes fault recovery, a handover operation is required, during which the ACA must re-authenticate its identity with the UAV cluster and re-establish secure communication. However, traditional, centralized identity authentication and ACA handover mechanisms face security risks such as single points of failure and man-in-the-middle attacks. In highly dynamic network environments, single-chain blockchain architectures also suffer from throughput bottlenecks, leading to reduced handover efficiency and increased authentication latency. To address these challenges, this paper proposes a mathematically structured dual-chain framework that utilizes a distributed ledger to decouple the management of identity and authentication information. We formalize the ACA handover process using cryptographic primitives and accumulator functions and validate its security through BAN logic. Furthermore, we conduct quantitative analyses of key performance metrics, including time complexity and communication overhead. The experimental results demonstrate that the proposed approach ensures secure handover while significantly reducing computational burden. The framework also exhibits strong scalability, making it well-suited for large-scale UAV cluster networks. Full article
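For intuition on how a distributed ledger lets a cluster re-authenticate a recovered ACA without a central server, the sketch below substitutes a plain Merkle tree for the paper's accumulator construction: every UAV stores only a short root digest, and any credential can be checked against it with a logarithmic-size proof. This illustrates the membership-proof idea only, not the proposed dual-chain protocol.

```python
# Merkle-tree membership proof as a stand-in for an accumulator:
# the cluster holds `root`; a node proves its credential is in the set.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    if len(level) % 2:                  # duplicate last node on odd levels
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def prove(leaves, index):
    """Sibling hashes from the leaf at `index` up to the root."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2 == 0))
        level = _next_level(level)
        index //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sibling, leaf_is_left in path:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

creds = [b"ACA-01", b"UAV-02", b"UAV-03", b"UAV-04"]
root = merkle_root(creds)
print(verify(root, b"ACA-01", prove(creds, 0)))    # True: credential is in the set
print(verify(root, b"intruder", prove(creds, 0)))  # False: digest mismatch at root
```

A real accumulator improves on this by supporting constant-size witnesses and efficient updates as members join or leave, which matters for the highly dynamic cluster topologies the paper targets.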
