A Case for User-Centered Design in Satellite Command and Control
Abstract
1. Introduction
1.1. Specific Design Challenges
- Mission: different mission types, such as communications, surveillance, reconnaissance, or experimental data collection (e.g., meteorology), each shape HMI requirements;
- Payload: the payloads on each satellite may each require specific controls and displays to be included in the HMI;
- Orbit: the orbit of the satellite can greatly affect operator workflows. Contact times (i.e., the time available to send commands and receive telemetry) can be limited to only minutes or, in theory, be unlimited;
- Operational/manning profile: some organizations or units fly large constellations where several people are assigned to each vehicle or contact, each performing very specific tasks. Conversely, other organizations may fly fewer satellites and/or have a single operator performing all required tasks from a single HMI;
- Users: multiple user archetypes are involved in SATC2. The HMI must provide capabilities not only to vehicle operators, but also to payload operators, supervisors (e.g., mission or crew commanders), and a variety of “backroom” staff (e.g., engineers and orbital analysts).
1.2. User-Centered Design
1.3. Research Goals
- Performance and situation awareness (SA): increase operator performance and SA such that operators can accomplish more tasks successfully and perceive and react to anomalies more quickly;
- Workload: reduce the workload burden on operators so that they can apply cognitive energy to high-level problems that are beyond automation;
- Usability: create a more intuitive and enjoyable HMI than operators currently experience;
- Transferability: develop a system on which operators can become proficient with less training.
2. Materials and Methods
2.1. User-Centered Design
2.1.1. UCD Framework and Components
2.1.2. Functional and Task Analysis
2.1.3. Rapid Prototyping
- Sketches: low-fidelity drawings to demonstrate a concept, usually developed with pen and paper or on a whiteboard;
- Wireframes: medium-fidelity, black-and-white layouts to allocate screen space to required controls and displays. Wireframes are usually developed with tools such as Balsamiq Mockups or Microsoft PowerPoint;
- Mockups: high-fidelity renderings of what the implemented system will look like, including colors, iconography, and text hierarchy. Mockups are usually developed with tools such as Adobe Illustrator.
2.1.4. Knowledge Elicitation (KE) Interviews
2.1.5. Targeted Ideation Development Events (TIDEs) and Focus Groups
2.1.6. The UCD Framework in Practice
2.1.7. User-Driven Requirements
- Window Management: Some legacy systems require dozens of unique panes or windows to conduct basic tasks, which can create clutter and occlude critical information. Users require critical controls and displays to be consolidated into a single window, with no modals or floating panes obscuring key information. We used affordances from web browsers in our designs (e.g., tabs, hyperlinks, drag-and-drop interactions), since users indicated that they were comfortable juggling large amounts of information in browsers;
- Hyperlink Interactivity: Another browser affordance required by users was the use of hyperlinks to facilitate movement between different pages of information and/or widgets. This enabled more single-click navigation and reduced the time operators spent navigating drop-down context menus;
- Task-Driven Telemetry: Most legacy systems organize and display telemetry based on system configurations. We incorporated task-driven telemetry pages that were configured for specific tasks or maneuvers and added an ability for users to create and save custom pages of telemetry;
- Roll-up Status Information: Many legacy systems only flag specific telemetry points with anomalous values; some do not flag them at all. Users require roll-up status information at the subsystem and vehicle level, enabling users to “follow the trail” and more rapidly find the applicable detailed telemetry required to resolve anomalies (a minimal aggregation sketch follows this list);
- Proactive Decision Support: Current stove-piped operations often require emails and phone calls to obtain information to support decision making. Acknowledging that there will likely be a central source of data in future operations, users require numerous proactive decision support capabilities (e.g., tool-tips on-hover or on-click actions) to provide operators with additional information to answer the follow-on questions they are most likely to have.
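To make the roll-up requirement concrete, the sketch below shows one way subsystem- and vehicle-level statuses could be aggregated from individual telemetry flags. This is a minimal Python sketch, not the AESOP implementation; the subsystem names, telemetry points, and three-level severity scale are illustrative assumptions.

```python
from enum import IntEnum

class Status(IntEnum):
    """Severity-ordered flags; max() yields the worst case."""
    NOMINAL = 0
    CAUTION = 1
    ANOMALOUS = 2

# Hypothetical telemetry snapshot: subsystem -> {telemetry point: status}.
telemetry = {
    "EPS":  {"battery_voltage": Status.NOMINAL, "bus_current": Status.CAUTION},
    "ADCS": {"wheel_speed_1": Status.NOMINAL, "gyro_temp": Status.NOMINAL},
    "TT&C": {"rx_lock": Status.ANOMALOUS, "tx_power": Status.NOMINAL},
}

def subsystem_status(points: dict) -> Status:
    """Roll individual telemetry flags up to one subsystem-level status."""
    return max(points.values(), default=Status.NOMINAL)

def vehicle_status(subsystems: dict) -> Status:
    """Roll subsystem statuses up to one vehicle-level status."""
    return max((subsystem_status(p) for p in subsystems.values()),
               default=Status.NOMINAL)

# A vehicle-level ANOMALOUS flag lets the operator "follow the trail"
# down to TT&C, and from there to the flagged rx_lock telemetry point.
for name, points in telemetry.items():
    print(f"{name}: {subsystem_status(points).name}")
print(f"Vehicle: {vehicle_status(telemetry).name}")
```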
2.2. User Testing Methods
2.2.1. Participants
2.2.2. Apparatus
2.2.3. Dependent Measures
2.2.4. Performance and SA
2.2.5. Workload
2.2.6. Usability
2.2.7. Scenario and Tasks
- State of Health (SOH): the simplest SATC2 scenario, where the participant checks telemetry on the vehicle, and conducts no commanding;
- Nominal commanding: an intermediate SATC2 scenario, where the participant conducts commanding for a standard task known a priori;
- Anomalous commanding: the most difficult SATC2 scenario, where the participant must identify and respond to an anomaly that was not known a priori (a sketch of this parameterization follows the list).
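The three pass types above differ along two dimensions: whether the participant commands the vehicle, and whether the task is known a priori. The sketch below illustrates how such scenarios might be parameterized in a test harness; the class and field names are assumptions for illustration, not artifacts of the study.

```python
from dataclasses import dataclass
from enum import Enum

class Difficulty(Enum):
    LOW = "low"
    INTERMEDIATE = "intermediate"
    HIGH = "high"

@dataclass(frozen=True)
class PassScenario:
    """One SATC2 test contact, parameterized along the two dimensions."""
    name: str
    difficulty: Difficulty
    commanding: bool       # does the participant send commands?
    known_a_priori: bool   # is the task briefed before the pass?

SCENARIOS = [
    PassScenario("State of Health", Difficulty.LOW,
                 commanding=False, known_a_priori=True),
    PassScenario("Nominal Commanding", Difficulty.INTERMEDIATE,
                 commanding=True, known_a_priori=True),
    PassScenario("Anomalous Commanding", Difficulty.HIGH,
                 commanding=True, known_a_priori=False),
]
```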
3. Results
3.1. Performance Results
3.2. SA Results
3.3. Workload Results
3.4. Usability
4. Discussion
5. Conclusions
- Of the 72 test contact supports conducted during user testing, the only two failed supports were anomalous commanding passes (the highest difficulty) with the legacy HMI. All passes conducted with AESOP, regardless of participant experience, were successful;
- Participants detected, diagnosed, and resolved anomalies in significantly less time with AESOP than with the legacy HMI. On average, anomaly resolution times were 64% shorter with AESOP than with the legacy HMI;
- On average, participants experienced less workload with AESOP than with the legacy HMI. Although SOH passes showed a negligible difference, the difference was statistically significant for nominal commanding, anomalous commanding, and overall workload (the mean workload of all three pass types); the test-selection logic behind these comparisons is sketched after this list;
- AESOP is more transferable to new operators than the legacy HMI. Participants with zero experience with the test system experienced significantly less workload with AESOP than they did with the legacy HMI on nominal commanding, anomalous commanding, and overall workload (the mean workload of all three pass types);
- Participants rated AESOP as being significantly more usable than the legacy HMI, rating it approximately 30 points higher than the legacy HMI on average (on a 100-point scale).
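A note on the statistical comparisons above: the results tables report an F statistic (one-way ANOVA) for some measures and a U statistic (Mann–Whitney) for others, and the Kolmogorov–Smirnov (K-S) test appears in the abbreviations, which suggests that normality screening determined which test was used. The sketch below illustrates that selection logic with SciPy; the alpha level and the example data are assumptions, not values from the study.

```python
import numpy as np
from scipy import stats

def compare_conditions(legacy, aesop, alpha=0.05):
    """Pick a parametric or nonparametric test based on K-S screening."""
    # K-S test of each sample against a normal distribution fit to it
    # (a simplification; the study's exact screening may differ).
    normal = all(
        stats.kstest(x, "norm", args=(np.mean(x), np.std(x, ddof=1))).pvalue > alpha
        for x in (legacy, aesop)
    )
    if normal:
        stat, p = stats.f_oneway(legacy, aesop)  # parametric: F statistic
        return "F", stat, p
    stat, p = stats.mannwhitneyu(legacy, aesop)  # nonparametric: U statistic
    return "U", stat, p

# Hypothetical NASA-TLX scores for one pass type:
legacy = [33.2, 28.9, 41.0, 17.5, 36.1, 30.4]
aesop = [12.0, 9.8, 22.5, 8.1, 15.6, 11.2]
print(compare_conditions(legacy, aesop))
```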
Limitations and Future Work
- Because task flows with AESOP were fundamentally different from those with the legacy HMI, it was infeasible to instrument the software to capture task completion times that could be validly compared across all tasks. In future work we will aim to capture task completion times for more than just anomaly resolution, enabling a more detailed assessment of how workload is reduced (a sketch of such instrumentation follows this list).
- There was a ceiling effect in the task success measures because the test scenarios were too simple, which decreased the diagnosticity of the performance and subjective workload measures. In future studies we will create a more difficult set of scenarios, such as multiple concurrent contacts, which could improve the diagnosticity of several of the measures used in this test event and provide a more valid “stress test” of the HMIs.
- While we were able to assess transferability by studying novice operators, we could do more to address this phenomenon explicitly. In future work we will seek to test operators on a different satellite system (not just a new HMI) to assess the degree to which AESOP supports transferability across systems.
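As a sketch of the task-timing instrumentation described in the first item above, the snippet below logs elapsed time per task so that completion times could be compared across HMIs in future studies. It is a minimal illustration under assumed names, not the test software.

```python
import time
from contextlib import contextmanager

class TaskTimer:
    """Records (hmi, task, seconds) tuples for later comparison."""

    def __init__(self):
        self.records = []

    @contextmanager
    def task(self, hmi: str, name: str):
        start = time.monotonic()
        try:
            yield
        finally:
            self.records.append((hmi, name, time.monotonic() - start))

timer = TaskTimer()
with timer.task("AESOP", "verify_battery_soh"):
    pass  # the operator performs the task here
print(timer.records)
```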
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
AESOP | Adaptable Environment for Space Operations
ANOVA | Analysis of Variance
CTA | Cognitive Task Analysis
DOD | Department of Defense
DT | Design Thinking
EGS | Enterprise Ground Services
F | ANOVA Test Statistic
HFE | Human Factors Engineering
HMI | Human–Machine Interface
KE | Knowledge Elicitation
K-S | Kolmogorov–Smirnov Test
M | Mean
Mdn | Median
MSVP | Mission-Specific Vendor Plugin
MTTL | Master Training Task List
NASA-TLX | National Aeronautics and Space Administration Task Load Index
SA | Situation Awareness
SAGAT | Situation Awareness Global Assessment Technique
SATC2 | Satellite Command and Control
SD | Standard Deviation
SOH | State of Health
SST | Standard Space Trainer
SUS | System Usability Scale
TIDE | Targeted Ideation Development Event
TDFA | Top-Down Functional Analysis
TO | Technical Order
U | Mann–Whitney Test Statistic
UCD | User-Centered Design
USAF | United States Air Force
References
- Henry, C. DOD Prepares for Overhaul of Military Ground Systems. Satellite Today, 2015. Available online: https://www.satellitetoday.com/government-military/2015/09/14/dod-prepares-for-overhaul-of-military-ground-systems/ (accessed on 5 October 2021).
- Kolodziejski, P.J.; Bille, M.; Quinonez, E. Enabling the Air Force space enterprise vision through small satellites and rapid acquisition. In Proceedings of the 2018 AIAA SPACE and Astronautics Forum and Exposition, Orlando, FL, USA, 17–19 September 2018. [Google Scholar]
- Tadjdeh, Y. Training the space force: How the military will prepare for future battles. Natl. Def. 2018, 103, 30–33. [Google Scholar]
- Straight, C.; Manship, A.; Rexach, C.; Abrecht, B.; Garlisi, C.; Rosario, M. User-Defined Operational Picture (UDOP): Development Vision [UNCLASSIFIED//DISTRIBUTION D]. 2017. [Google Scholar]
- Emerson, N. US Space Force to Train Space Professionals in Space Warfighting Disciplines. Available online: https://www.spaceforce.mil/News/Article/2198012/us-space-force-to-train-space-professionals-in-space-warfighting-disciplines/ (accessed on 5 October 2021).
- United States Government Accountability Office. DoD Space Acquisitions: Including Users Early and Often in Software Development Could Benefit Programs. 2019. Available online: https://www.gao.gov/assets/700/697617.pdf (accessed on 14 October 2021).
- Vollmer, J.; Atkinson, M. The importance of flight operations involvement during the early phases of the systems development lifecycle for enterprise multi-mission ground system upgrades. In Proceedings of the 2018 SpaceOps Conference, Marseille, France, 28 May–1 June 2018. [Google Scholar]
- Narva, M.A.; Muckler, F.A. Visual surveillance and reconnaissance from space vehicles. Hum. Factors 1963, 5, 295–315. [Google Scholar] [CrossRef] [PubMed]
- Dorton, S.L.; Ganey, H.C.N.; Mintman, E.; Mittu, R.; Smith, M.A.B.; Winters, J. Human-centered alphabet soup: Approaches to systems development from related disciplines. In Proceedings of the 2021 HFES 65th International Annual Meeting, Baltimore, MD, USA, 4–7 October 2021; in press. [Google Scholar]
- Ballew, T.; Bartha, M.C.; Harper, C.; Holmes, D.; Kruithof, P.; Meingast, M. UX & HF: The state of the union. In Proceedings of the 2020 HFES 64th International Annual Meeting, Virtual, 5–9 October 2020; Volume 64, pp. 568–576. [Google Scholar] [CrossRef]
- Wogalter, M.S.; Hancock, P.A.; Dempsey, P.G. On the description and definition of human factors/ergonomics. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Chicago, IL, USA, 1998; Volume 42, pp. 1807–1811. [Google Scholar] [CrossRef]
- Brown, T. Change by Design; Harper Business: New York, NY, USA, 2009. [Google Scholar]
- Liedtka, J. Evaluating the Impact of Design Thinking in Action; Darden Working Paper Series; University of Virginia: Charlottesville, VA, USA, 2018; pp. 1–48. [Google Scholar]
- Dorton, S.L.; Maryeski, L.R.; Ogren, L.; Dykens, I.D.; Main, A. A wargame-augmented knowledge elicitation method for the agile development of novel systems. Systems 2020, 8, 27. [Google Scholar] [CrossRef]
- Rott, J.; Weizler, J.; Rabl, A.; Sandl, P.; Wei, M.; Vogel-Heuser, B. Integrating hierarchical task analysis into model-based system design using Airbus XHTA and IBM Rational Rhapsody. In Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management, Bangkok, Thailand, 16–19 December 2018. [Google Scholar]
- Lercel, D.; Andrews, D.H. Cognitive task analysis of unmanned aircraft system pilots. Int. J. Aerosp. Psychol. 2021, 31, 319–342. [Google Scholar]
- Wei, L.; He, L.; Liu, Y. Study of artificial intelligence flight co-pilot speech recognition technology. In Proceedings of the 2020 IEEE 2nd ICCASIT, Weihai, China, 14–16 October 2020; pp. 681–685. [Google Scholar]
- Doran, H.D.; Reif, M.; Oehler, M.; Stohr, C. Conceptual design of human-drone communication in collaborative environments. In Proceedings of the 50th International Conference on DSN-W, Valencia, Spain, 29 June–2 July 2020; pp. 118–121. [Google Scholar]
- Blanchard, B.S.; Fabrycky, W.J. Systems Engineering and Analysis, 4th ed.; Prentice Hall: Hoboken, NJ, USA, 2006. [Google Scholar]
- Buede, D. The Engineering Design of Systems Models and Methods, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
- Lockett, J.F.; Powers, J. Human Factors Engineering Methods and Tools. In Handbook of Human Systems Integration; Booher, H.R., Ed.; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
- Vicente, K.J. Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1999. [Google Scholar]
- Schraagen, J.M.; Chipman, S.F.; Shalin, V.L. Introduction to cognitive task analysis. In Cognitive Task Analysis; Schraagen, J.M., Chipman, S.F., Shalin, V.L., Eds.; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2000. [Google Scholar]
- Dorton, S.L.; Frommer, I.D.; Garrison, T.M. A theoretical model for assessing information validity from multiple observers. In Proceedings of the 2019 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Las Vegas, NV, USA, 8–11 April 2019; pp. 62–68. [Google Scholar] [CrossRef]
- Johnson, R.B. Examining the validity structure of qualitative research. Education 1997, 118, 282–292. [Google Scholar]
- Sonalysts, Inc. Adaptable Environment for Space Operations (AESOP) R&D Highlight. 2020. Available online: https://www.youtube.com/watch?v=C4j2I13fKaY (accessed on 14 October 2021).
- Dorton, S.; Thirey, M. Effective variety? For whom (or what)? In Proceedings of the 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Savannah, GA, USA, 27–31 March 2017. [Google Scholar]
- Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
- Endsley, M.R.; Jones, D.G. Designing for Situation Awareness: An Approach to User-Centered Design, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
- Endsley, M.R. Direct measurement of situation awareness: Validity and use of SAGAT. In Situation Awareness Analysis and Measurement; Endsley, M.R., Garland, D., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2000. [Google Scholar]
- Hart, S.; Staveland, L. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research; Hancock, P., Meshkati, N., Eds.; Human Mental Workload; Elsevier: Amsterdam, The Netherlands, 1988; pp. 139–183. [Google Scholar]
- Grier, R.A. How high is high? A meta-analysis of NASA-TLX global workload scores. In Proceedings of the Human Factors and Ergonomics Society 59th Annual Meeting, Los Angeles, CA, USA, 26–30 October 2015; pp. 1727–1731. [Google Scholar]
- Rubin, J.; Chisnell, D. Handbook of Usability Testing, 2nd ed.; Wiley: Indianapolis, IN, USA, 2008. [Google Scholar]
- Brooke, J. SUS: A “quick and dirty” usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., Werdmeester, B.A., McClelland, I.L., Eds.; Taylor & Francis: London, UK, 1996; pp. 189–194. [Google Scholar]
- Bangor, A.; Kortum, P.T.; Miller, J.T. An empirical evaluation of the system usability scale. Int. J. Hum. Comput. Interact. 2008, 24, 574–594. [Google Scholar] [CrossRef]
- Lewis, J.R.; Sauro, J. The factor structure of the system usability scale. In Human Centered Design; Springer: Berlin/Heidelberg, Germany, 2009; pp. 94–103. [Google Scholar]
- Peterson, D.A.; Kozhokar, D. Peak-end effects for subjective mental workload ratings. In Proceedings of the Human Factors and Ergonomics Society 2017 Annual Meeting, Austin, TX, USA, 9–13 October 2017; pp. 2052–2056. [Google Scholar]
- Field, A. Discovering Statistics Using SPSS, 3rd ed.; Sage Publications: Los Angeles, CA, USA, 2009. [Google Scholar]
- Bangor, A.; Kortum, P.; Miller, J. Determining what individual SUS scores mean: Adding an adjective rating scale. J. Usability Stud. 2009, 4, 114–123. [Google Scholar]
- Bell, B.M.; Rogers, E.T. Space resilience and the contested, degraded, and operationally limited environment: The gaps in tactical space operations. Air Space Power J. 2014, 28, 130–147. [Google Scholar]
Method | Outcomes
---|---
Top-Down Functional Analysis (TDFA) and Cognitive Task Analysis (CTA) |
Rapid Prototyping (Wireframes, etc.) |
Knowledge Elicitation (KE) Interviews |
Targeted Ideation Development Event (TIDE) |
Focus Groups |
Demographic | M (SD) | Min | Max
---|---|---|---
Total SATC2 Experience (Months) | 32.75 (17.33) | 12 | 66
Total Test System Experience (Months) | 19.00 (10.71) | 4 | 30
Number of HMIs Used | 1.25 (0.62) | 1 | 3
Number of Successful Supports (%)

Pass Type | Legacy | AESOP
---|---|---
SOH | 12 (100%) | 12 (100%)
Nominal Commanding | 12 (100%) | 12 (100%)
Anomalous Commanding | 10 (83%) | 12 (100%)
NASA-TLX Workload by Pass Type

Pass Type | Legacy M (SD) | Legacy Mdn | AESOP M (SD) | AESOP Mdn | Stat | Value | p
---|---|---|---|---|---|---|---
SOH | 11.81 (7.58) | 8.67 | 8.81 (4.28) | 7.34 | F | 1.43 | 0.25
Nominal | 16.44 (9.65) | 13.33 | 11.03 (7.99) | 7.50 | U | 40.50 | 0.03
Anomalous | 32.25 (13.79) | 33.17 | 16.05 (17.11) | 11.17 | U | 21.50 | 0.00
Overall TLX | 20.17 (8.33) | 17.61 | 11.98 (8.48) | 9.00 | F | 5.69 | 0.01
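The Overall TLX row above is defined (see Conclusions) as the mean workload across the three pass types. The sketch below shows one way such scores could be computed, assuming the unweighted raw-TLX variant in which the global score is the mean of the six subscale ratings; the study may instead have used the weighted Hart and Staveland procedure, and the example ratings are hypothetical.

```python
from statistics import mean

# The six NASA-TLX subscales (Hart & Staveland), each rated 0-100.
SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")

def tlx_global(ratings: dict) -> float:
    """Raw-TLX global workload: unweighted mean of the six subscales."""
    return mean(ratings[s] for s in SUBSCALES)

# Hypothetical ratings for one participant's three passes:
soh = dict.fromkeys(SUBSCALES, 10.0)
nominal = dict.fromkeys(SUBSCALES, 15.0)
anomalous = dict.fromkeys(SUBSCALES, 30.0)

overall = mean(tlx_global(p) for p in (soh, nominal, anomalous))
print(f"Overall TLX: {overall:.2f}")  # -> 18.33
```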
Mean (SD) Workload with One-Way ANOVA

Measure | Legacy M (SD) | AESOP M (SD) | F(2,10) | p | η²
---|---|---|---|---|---
SOH Workload | 11.56 (10.46) | 6.61 (1.42) | 1.32 | 0.28 | 0.12
Nominal Workload | 19.50 (11.97) | 6.50 (1.76) | 6.93 | 0.03 | 0.41
Anomalous Workload | 31.94 (17.79) | 10.33 (6.56) | 7.80 | 0.02 | 0.44
Overall Workload | 21.00 (11.45) | 7.82 (2.72) | 7.53 | 0.02 | 0.43
Numerical and Adjective SUS Ratings

HMI | M | Mdn | SD | Min | Max | Acceptability | Grade | Adjective
---|---|---|---|---|---|---|---|---
Legacy | 56.02 | 57.50 | 22.40 | 25.00 | 90.00 | Marginal (Low) | F | OK
AESOP | 83.54 | 87.50 | 10.41 | 62.50 | 95.00 | Acceptable | B | Excellent
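For reference, the 0–100 scores above follow the standard SUS scoring procedure (Brooke, 1996): odd items are positively worded and scored as the rating minus one, even items are negatively worded and scored as five minus the rating, and the 0–40 sum is multiplied by 2.5. A minimal sketch with hypothetical responses:

```python
def sus_score(responses):
    """Score a 10-item SUS questionnaire (1-5 Likert ratings, item 1 first)."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # odd items at even indices
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 sum to 0-100

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # -> 90.0
```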