Learnability in Automated Driving (LiAD): Concepts for Applying Learnability Engineering (CALE) Based on Long-Term Learning Effects
Abstract
1. Introduction
1.1. Motivation
1.2. Contribution
- Identify core operational concepts in order to consider the contributions, implications and boundaries of researching ‘learnability in the context of automated driving’;
- Propose a working definition of learnability that underpins long-term research, as a way to account for the consequences of long-term effects on user behaviour.
- Section 1 (Introduction): An introductory discussion of the topic of interest.
- Section 2 (‘What Is Already Known’): An overview of the literature on learnability.
- Section 3 (‘What This Paper Adds’): Engineering knowledge that underpins the operationalisation of the concept. Furthermore, we elaborate on how automated driving engineers and researchers may need to employ extended assessment tools when designing for lifelong behaviour-based safety.
2. What Is Already Known?
2.1. Learnability
- Learnability is a function of the user’s experience, i.e., it depends on the type of user for which the learning occurs;
- Learnability can be evaluated on a single usage period (initial learnability) or after several usages (extended learnability);
- Learnability measures the performance of user interaction in terms of completion times, error and success rates or percentage of functionality understood.
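The measures listed above can be made concrete in code. The following is a minimal sketch, not taken from the paper: the trial-log structure, field names and numbers are invented assumptions. Comparing an early and a late usage period contrasts initial with extended learnability.

```python
# Sketch: common learnability measures computed from a hypothetical trial log.
# All field names and data are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Trial:
    completion_time_s: float  # time to complete the task, in seconds
    errors: int               # errors committed during the trial
    success: bool             # was the task completed successfully?

def learnability_metrics(trials: list[Trial]) -> dict[str, float]:
    """Summarise one usage period (initial) or a later one (extended)."""
    n = len(trials)
    return {
        "mean_completion_time_s": sum(t.completion_time_s for t in trials) / n,
        "error_rate": sum(t.errors for t in trials) / n,
        "success_rate": sum(t.success for t in trials) / n,
    }

# Initial learnability: the first usage period only.
session_1 = [Trial(42.0, 3, False), Trial(35.5, 2, True), Trial(30.1, 1, True)]
# Extended learnability: a later period, after several usages.
session_9 = [Trial(12.4, 0, True), Trial(11.8, 0, True), Trial(12.1, 1, True)]

print(learnability_metrics(session_1))
print(learnability_metrics(session_9))
```

Falling completion times and error rates between the two periods are what a learning curve summarises session by session.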
1. Group A considers ease-of-learning to be a sub-concept of ease-of-use.
2. Group B sees ease-of-learning and ease-of-use as competing attributes that can seldom be fulfilled at the same time.
3. Group C sees ease-of-learning as an attribute that covers the whole usage process (e.g., a system can be easy-to-learn for beginners, while for experienced users it may constantly assist in finding new and better ways to use it).
2.2. Theories of Learning and Behaviour
- Learning theories are situated in a somewhat vague conceptual field, connected to and intertwined with information processing and knowledge acquisition;
- Learning theories represent a fuzzy mixture of principles and applications.
2.3. “X about X”: Linking Learning Effects and BAC
- Immediately, or
- Only after a short delay (e.g., one hour), or
- Even after a long delay (e.g., one year) [66].
- On the experimental side, “it is argued that a single law, the power law of practice, adequately describes all of the practice data”.
- On the theoretical side, “a model of practise rooted in modern cognitive psychology, the chunking theory of learning, is formulated”.
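The power law of practice states that task completion time falls as a power function of the number of practice trials, T(N) = T1 · N^(−α). A minimal sketch of fitting it, not from the paper: the data are synthetic and generated from invented parameters, so the fit recovers them exactly.

```python
# Sketch: fitting the power law of practice, T(N) = T1 * N**(-alpha),
# to per-trial completion times via linear regression in log-log space.
import math

def fit_power_law(times: list[float]) -> tuple[float, float]:
    """Least-squares fit of log T = log T1 - alpha * log N."""
    xs = [math.log(n + 1) for n in range(len(times))]  # log of trial number 1..N
    ys = [math.log(t) for t in times]
    n = len(times)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # (T1, alpha)

# Synthetic times generated exactly from T1 = 40, alpha = 0.4.
times = [40.0 * (n + 1) ** -0.4 for n in range(20)]
t1, alpha = fit_power_law(times)
print(f"T1 = {t1:.1f}, alpha = {alpha:.2f}")  # → T1 = 40.0, alpha = 0.40
```

With real practice data the log–log points only approximate a line, and the fitted α summarises how fast performance improves with repetition.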
1. Cognitive stage: the learner commits to memory (memorisation) a set of details relevant to the skill;
2. Associative stage: the learner forms the details into a procedural model (containing step-by-step instructions for performing a certain action);
3. Autonomous stage: the procedure becomes more automated and rapid and, in the end, requires very few processing resources.
3. What This Paper Adds: The Learnability Engineering Life Cycle
3.1. Conceptualisation
- Our conceptualisation:
  - LiAD refers to a characteristic of the user’s quality of learning, taking into account the ease of learning-to-use automation systems within the spectrum from short-term or initial learnability (initial LiAD: [i]LiAD) to long-term or extended learnability (extended LiAD: [e]LiAD). It can be measured based on learning curves (considering their slope and plateau) and on behaviour modification along a scale from desirable (efficient, effective, satisfying, error-free and safe behaviours with the system) to undesirable (inefficient/deficient, ineffective, unsatisfying, error-prone and risk-taking) behaviours as a result of learning effects/long-term effects.
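The slope and plateau of a learning curve can be read off session-level performance scores. A rough illustrative sketch, not from the paper — the functions, window size and numbers below are invented — where the early slope speaks to [i]LiAD and the late plateau to [e]LiAD:

```python
# Sketch: estimating a learning curve's initial slope ([i]LiAD) and its
# plateau ([e]LiAD) from per-session performance scores. Data are invented.

def initial_slope(scores: list[float], k: int = 3) -> float:
    """Mean improvement per session over the first k sessions."""
    return (scores[k - 1] - scores[0]) / (k - 1)

def plateau(scores: list[float], k: int = 3) -> float:
    """Mean performance over the last k sessions."""
    return sum(scores[-k:]) / k

# Hypothetical task-performance scores (higher is better) over ten sessions.
scores = [0.40, 0.55, 0.66, 0.74, 0.80, 0.84, 0.87, 0.88, 0.89, 0.89]

print(initial_slope(scores))  # steepness of early learning
print(plateau(scores))        # level at which behaviour settles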
3.2. Operationalisation
- Our operationalisation:
  - CALE aligned to LiAD research considers learnability based on the following: the nature/purpose/objectives of the research or design (for what), the research or design topics (to what), the agents/entities of interest in the research or design (of what) and the design or research setting and methods (through what). These represent important fundamentals for operationalising LiAD research. Scientifically exploring these questions provides rich knowledge about the quality aspects of LiAD.
3.2.1. Operational Concept I: Learnability for What?
3.2.2. Operational Concept II: Learnability to What?
3.2.3. Operational Concept III: Learnability of What?
3.2.4. Operational Concept IV: Learnability through What?
3.3. Contextualisation
Theorising Effects: A Theory of Effects
- First phase: the “learning phase”, in which the user learns how to operate the system, identifies system limits and internalises the system functionality; this phase can be influenced by how the automation system is introduced to the user [75].
- Second phase: the “integration phase”, in which the driver integrates the system into the management of the overall operating task through increasing experience in different situations [76].
3.4. Systematisation of LiAD and Multi-Level Long-Term Research
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
ACC | Adaptive Cruise Control |
ADS | Automated Driving System |
AI | Artificially Intelligent |
BAC | Behaviour Adaptability and/or Changeability |
CALE | Concepts for Applying Learnability Engineering |
ESM | Experience Sampling Methods |
HAI | Human–Automation Interaction |
HAC | Human–Automation Compatibility |
HCI | Human–Computer Interaction |
HMI | Human–Machine Interface |
LiAD | Learnability in Automated Driving |
OEM | Original Equipment Manufacturer |
PUMS | Programmable User Models |
QAD | Quality Automated Driving |
QinU | Quality in Usability |
QinL | Quality in Learnability |
UX | User Experience |
ISO | International Organization for Standardization |
[i]LiAD | Initial Learnability in Automated Driving (initial LiAD) |
[e]LiAD | Extended Learnability in Automated Driving (extended LiAD) |
S-RC | Stimulus-Response Compatibility |
References
- SAE International. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (J3016_202104); SAE International: Warrendale, PA, USA, 2021. [Google Scholar]
- Kyriakidis, M.; de Winter, J.C.F.; Stanton, N.; Bellet, T.; van Arem, B.; Brookhuis, K.; Martens, M.H.; Bengler, K.; Andersson, J.; Merat, N.; et al. A human factors perspective on automated driving. Theor. Issues Ergon. Sci. 2019, 20, 223–249. [Google Scholar] [CrossRef]
- Körber, M.; Baseler, E.; Bengler, K. Introduction matters: Manipulating trust in automation and reliance in automated driving. Appl. Ergon. 2018, 66, 18–31. [Google Scholar] [CrossRef] [PubMed]
- Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef] [PubMed]
- Mbelekani, N.Y.; Bengler, K. Learning Design Strategies for Optimizing User Behaviour towards Automation: Architecting Quality Interactions from Concept to Prototype. In Proceedings of the International Conference on Human-Computer Interaction, Copenhagen, Denmark, 23–28 July 2023; Springer Nature: Cham, Switzerland, 2023; pp. 90–111. [Google Scholar]
- Heimgärtner, R. Human Factors of ISO 9241-110 in the Intercultural Context. Adv. Ergon. Des. Usability Spec. Popul. Part 2014, 3, 18. [Google Scholar]
- Rudin-Brown, C.M.; Parker, H.A.; Malisia, A.R. Behavioral adaptation to adaptive cruise control. In Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting, Denver, CO, USA, 13–17 October 2003; SAGE Publications: Los Angeles, CA, USA, 2003; Volume 47, pp. 1850–1854. [Google Scholar]
- Hoffman, R.R.; Marx, M.; Amin, R.; McDermott, P.L. Measurement for evaluating the learnability and resilience of methods of cognitive work. Theor. Issues Ergon. Sci. 2010, 11, 561–575. [Google Scholar] [CrossRef]
- Butler, K.A. Connecting theory and practice: A case study of achieving usability goals. ACM SIGCHI Bull. 1985, 16, 85–88. [Google Scholar] [CrossRef]
- Albert, B.; Tullis, T. Measuring the User Experience: Collecting, Analyzing, and Presenting UX Metrics; Morgan Kaufmann: Burlington, MA, USA, 2022. [Google Scholar]
- Oliver, M. Learning technology: Theorising the tools we study. Br. J. Educ. Technol. 2013, 44, 31–43. [Google Scholar] [CrossRef]
- Joyce, A. How to Measure Learnability of a User Interface; Nielsen Norman Group: Dover, DE, USA, 2019. [Google Scholar]
- Grossman, T.; Fitzmaurice, G.; Attar, R. A survey of software learnability: Metrics, methodologies and guidelines. In Proceedings of the Sigchi Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 649–658. [Google Scholar]
- Stickel, C.; Fink, J.; Holzinger, A. Enhancing universal access–EEG based learnability assessment. In Universal Access in Human-Computer Interaction. Applications and Services: 4th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2007 Held as Part of HCI International 2007, Beijing, China, 22–27 July 2007; Proceedings, Part III 4; Springer: Berlin/Heidelberg, Germany, 2007; pp. 813–822. [Google Scholar]
- Haramundanis, K. Learnability in information design. In Proceedings of the 19th Annual International Conference on Computer Documentation, Santa Fe, NM, USA, 21–24 October 2001; pp. 7–11. [Google Scholar]
- Howes, A.; Young, R.M. Predicting the Learnability of Task-Action Mappings. The Soar Papers (Vol. II): Research on Integrated Intelligence; MIT Press: Cambridge, MA, USA, 1993; pp. 1204–1209. [Google Scholar]
- Burton, J.K.; Moore, D.M.M.; Magliaro, S.G. Behaviorism and instructional technology. In Handbook of Research for Educational Communications and Technology, 2nd ed.; Jonassen, D.H., Ed.; Lawrence Erlbaum Associates, Publishers: London, UK, 2004; pp. 46–73. [Google Scholar]
- Nielsen, J. Usability Engineering; Academic Press: Boston, MA, USA, 1994. [Google Scholar]
- Holzinger, A. Usability engineering methods for software developers. Commun. ACM 2005, 48, 71–74. [Google Scholar] [CrossRef]
- Carroll, J.M.; Carrithers, C. Training wheels in a user interface. Commun. ACM 1984, 27, 800–806. [Google Scholar] [CrossRef]
- Bevan, N.; Macleod, M. Usability measurement in context. Behav. Inf. Technol. 1994, 13, 132–145. [Google Scholar] [CrossRef]
- Michelsen, C.D.; Dominick, W.D.; Urban, J.E. A methodology for the objective evaluation of the user/system interfaces of the MADAM system using software engineering principles. In Proceedings of the 18th Annual Southeast Regional Conference, Tallahassee, FL, USA, 24–26 March 1980; pp. 103–109. [Google Scholar]
- Abran, A.; Khelifi, A.; Suryn, W.; Seffah, A. Usability meanings and interpretations in ISO standards. Softw. Qual. J. 2003, 11, 325–338. [Google Scholar] [CrossRef]
- Nielsen, M.B.; Winkler, B.; Colonel, L. Addressing Future Technology Challenges through Innovation and Investment; Air University: Islamabad, Pakistan, 2012. [Google Scholar]
- Marrella, A.; Catarci, T. Measuring the learnability of interactive systems using a Petri Net based approach. In Proceedings of the 2018 Designing Interactive Systems Conference, Hong Kong, China, 9–13 June 2018; pp. 1309–1319. [Google Scholar]
- Shamsuddin, N.A.; Sulaiman, S.; Syed-Mohamad, S.M.; Zamli, K.Z. Improving learnability and understandability of a web application using an action-based technique. In Proceedings of the 2011 Malaysian Conference in Software Engineering, Johor Bahru, Malaysia, 13–14 December 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 245–250. [Google Scholar]
- Santos, P.J.; Albert, N.B. Discount Learnability Evaluation; Graphics, Visualization & Usability Center, Georgia Institute of Technology: Atlanta, GA, USA, 1995. [Google Scholar]
- Rieman, J. A field study of exploratory learning strategies. ACM Trans. Comput.-Hum. Interact. (TOCHI) 1996, 3, 189–218. [Google Scholar] [CrossRef]
- Goguen, J.A. Toward a Social, Ethical Theory of Information. In Social Science, Technical Systems, and Co-Operative Work; Bowker, G.C., Star, S.L., Turner, W., Eds.; Psychology Press: London, UK, 1997. [Google Scholar]
- Hoffman, R.R.; Elm, W.C. HCC implications for the procurement process. IEEE Intell. Syst. 2006, 21, 74–81. [Google Scholar] [CrossRef]
- Neville, K.; Hoffman, R.R.; Linde, C.; Elm, W.C.; Fowlkes, J. The procurement woes revisited. IEEE Intell. Syst. 2008, 23, 72–75. [Google Scholar] [CrossRef]
- Bruun, A. Training software developers in usability engineering: A literature review. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, Reykjavik, Iceland, 16–20 October 2010; pp. 82–91. [Google Scholar]
- Grudin, J. Utility and usability: Research issues and development contexts. Interact. Comput. 1992, 4, 209–217. [Google Scholar] [CrossRef]
- Mayhew, D.J. The usability engineering lifecycle. In Proceedings of the CHI’99 Extended Abstracts on Human Factors in Computing Systems, Pittsburgh, PA, USA, 15–20 May 1999; pp. 147–148. [Google Scholar]
- Nielsen, J. The usability engineering life cycle. Computer 1992, 25, 12–22. [Google Scholar] [CrossRef]
- Sohaib, O.; Khan, K. Integrating usability engineering and agile software development: A literature review. In Proceedings of the 2010 International Conference on Computer Design and Applications, Qinhuangdao, China, 25–27 June 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 2, pp. V2–V32. [Google Scholar]
- Schaffer, E. Institutionalization of Usability: A Step-by-Step Guide; Addison-Wesley Professional: Boston, MA, USA, 2004. [Google Scholar]
- Schaffer, E.; Lahiri, A. Institutionalization of UX: A Step-by-Step Guide to a User Experience Practice; Addison-Wesley: Boston, MA, USA, 2013. [Google Scholar]
- Mack, R.; Robinson, J.B. When novices elicit knowledge: Question asking in designing, evaluating, and learning to use software. In The Psychology of Expertise: Cognitive Research and Empirical AI; Springer: New York, NY, USA, 1992; pp. 245–268. [Google Scholar]
- Beggiato, M.; Pereira, M.; Petzoldt, T.; Krems, J. Learning and development of trust, acceptance and the mental model of ACC. A longitudinal on-road study. Transp. Res. Part F Traffic Psychol. Behav. 2015, 35, 75–84. [Google Scholar] [CrossRef]
- Haritos, T. A Study of Human-Machine Interface (HMI) Learnability for Unmanned Aircraft Systems Command and Control. Ph.D. Thesis, Nova Southeastern University, Fort Lauderdale, FL, USA, 2017. [Google Scholar]
- Ojeda, L.; Nathan, F. Studying learning phases of an ACC through verbal reports. In Driver Support and Information Systems: Experiments on Learning, Appropriation and Effects of Adaptiveness; Brouwer, R., Hoedemaeker, D.M., Eds.; 2006; pp. 47–73. [Google Scholar]
- Sauro, J.; Lewis, J.R. Standardized usability questionnaires. In Quantifying the User Experience: Practical Statistics for User Research; Elsevier: Waltham, MA, USA, 2012; pp. 198–212. [Google Scholar]
- Weinberger, M.; Winner, H.; Bubb, H. Adaptive cruise control field operational test—The learning phase. JSAE Rev. 2001, 22, 487–494. [Google Scholar] [CrossRef]
- Simon, J.H. Learning to Drive with Advanced Driver Assistance Systems. Empirical Studies of an Online Tutor and a Personalised Warning Display on the Effects of Learnability and the Acquisition of Skill. Ph.D. Thesis, Philosophische Fakultät, Technische Universität Chemnitz, Chemnitz, Germany, 2005. [Google Scholar]
- Elliott, G.J.; Jones, E.; Barker, P. A grounded theory approach to modelling learnability of hypermedia authoring tools. Interact. Comput. 2002, 14, 547–574. [Google Scholar] [CrossRef]
- Kato, T. What “question-asking protocols” can say about the user interface. Int. J. Man-Mach. Stud. 1986, 25, 659–673. [Google Scholar] [CrossRef]
- Mack, R.L.; Lewis, C.H.; Carroll, J.M. Learning to use word processors: Problems and prospects. ACM Trans. Inf. Syst. (TOIS) 1983, 1, 254–271. [Google Scholar] [CrossRef]
- Unsöld, M. Measuring Learnability in Human-Computer Interaction. Ph.D. Thesis, Ulm University, Ulm, Germany, 2018. [Google Scholar]
- Davis, S.; Wiedenbeck, S. The effect of interaction style and training method on end user learning of software packages. Interact. Comput. 1998, 11, 147–172. [Google Scholar] [CrossRef]
- Rafique, I.; Weng, J.; Wang, Y.; Abbasi, M.Q.; Lew, P.; Wang, X. Evaluating software learnability: A learnability attributes model. In Proceedings of the 2012 International Conference on Systems and Informatics (ICSAI2012), Yantai, China, 19–20 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 2443–2447. [Google Scholar]
- Linja-aho, M. Evaluating and Improving the Learnability of a Building Modeling System. Master’s Thesis, Helsinki University of Technology, Helsinki, Finland, 2005. [Google Scholar]
- Baecker, R.; Booth, K.; Jovicic, S.; McGrenere, J.; Moore, G. Reducing the gap between what users know and what they need to know. In Proceedings of the 2000 Conference on Universal Usability, Arlington, VA, USA, 16–17 November 2000; pp. 17–23. [Google Scholar]
- Paymans, T.F.; Lindenberg, J.; Neerincx, M. Usability trade-offs for adaptive user interfaces: Ease of use and learnability. In Proceedings of the 9th International Conference on Intelligent User Interfaces, Funchal, Portugal, 13–16 January 2004; pp. 301–303. [Google Scholar]
- Kieras, D.; Polson, P.G. An approach to the formal analysis of user complexity. Int. J. Man-Mach. Stud. 1985, 22, 365–394. [Google Scholar] [CrossRef]
- Brown, P.C.; Roediger, H.L., III; McDaniel, M.A. Make It Stick: The Science of Successful Learning; Harvard University Press: Cambridge, MA, USA, 2014. [Google Scholar]
- Lieberman, D.A. Human Learning and Memory; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
- Lowyck, J. Bridging learning theories and technology-enhanced environments: A critical appraisal of its history. In Handbook of Research on Educational Communications and Technology; Springer Science & Business Media: New York, NY, USA, 2014; pp. 3–20. [Google Scholar]
- Homans, H.; Radlmayr, J.; Bengler, K. Levels of driving automation from a user’s perspective: How are the levels represented in the user’s mental model? In Human Interaction and Emerging Technologies: Proceedings of the 1st International Conference on Human Interaction and Emerging Technologies (IHIET 2019), Nice, France, 22–24 August 2019; Springer International Publishing: Cham, Switzerland, 2020; pp. 21–27. [Google Scholar]
- Ko, A.J.; Myers, B.A.; Aung, H.H. Six learning barriers in end-user programming systems. In Proceedings of the 2004 IEEE Symposium on Visual Languages-Human Centric Computing, Rome, Italy, 26–29 September 2004; IEEE: Piscataway, NJ, USA, 2004; pp. 199–206. [Google Scholar]
- Allen, R.B. Mental models and user models. In Handbook of Human-Computer Interaction, 2nd ed.; Helander, M., Landauer, T.K., Prabhu, P., Eds.; Elsevier Science B.V.: North-Holland, The Netherlands, 1997; pp. 49–63. [Google Scholar]
- Beggiato, M.; Krems, J.F. The evolution of mental model, trust and acceptance of adaptive cruise control in relation to initial information. Transp. Res. Part F Traffic Psychol. Behav. 2013, 18, 47–57. [Google Scholar] [CrossRef]
- Wozney, L.; McGrath, P.J.; Newton, A.; Huguet, A.; Franklin, M.; Perri, K.; Leuschen, K.; Toombs, E.; Lingley-Pottie, P. Usability, learnability and performance evaluation of Intelligent Research and Intervention Software: A delivery platform for eHealth interventions. Health Inform. J. 2016, 22, 730–743. [Google Scholar] [CrossRef]
- Ertmer, P.A.; Newby, T.J. Behaviorism, cognitivism, constructivism: Comparing critical features from an instructional design perspective. Perform. Improv. Q. 2013, 26, 43–71. [Google Scholar] [CrossRef]
- Bell, F. Connectivism: Its place in theory-informed research and innovation in technology-enabled learning. Int. Rev. Res. Open Distrib. Learn. 2011, 12, 98–118. [Google Scholar] [CrossRef]
- De Houwer, J.; Hughes, S. The Psychology of Learning: An Introduction from a Functional-Cognitive Perspective; MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
- Pritchard, A. Ways of Learning: Learning Theories and Learning Styles in the Classroom; Routledge: London, UK, 2009. [Google Scholar]
- Lyons, J. Learning with technology: Theoretical foundations underpinning simulations in higher education. In Proceedings of the Future Challenges, Sustainable Futures, Proceedings ASCILITE, Wellington, New Zealand, 25–28 November 2012; pp. 582–586. [Google Scholar]
- Balakrishnan, V.; Gan, C.L. Students’ learning styles and their effects on the use of social media technology for learning. Telemat. Inform. 2016, 33, 808–821. [Google Scholar] [CrossRef]
- Newell, A.; Rosenbloom, P.S. Mechanisms of skill acquisition and the law of practice. In Cognitive Skills and Their Acquisition; Psychology Press: London, UK, 2013; pp. 1–55. [Google Scholar]
- Mbelekani, N.Y.; Bengler, K. Interdisciplinary Industrial Design Strategies for Human-Automation Interaction: Industry Experts’ Perspectives. Interdiscip. Pract. Ind. Des. 2022, 48, 132–141. [Google Scholar]
- Simonet, S.; Wilde, G.J.S. Risk: Perception, acceptance and homeostasis. Appl. Psychol. Int. Rev. 1997, 46, 235–252. [Google Scholar] [CrossRef]
- Linja-aho, M. Creating a framework for improving the learnability of a complex system. Hum. Technol. 2006, 2, 202–224. [Google Scholar] [CrossRef]
- Forster, Y.; Hergeth, S.; Naujoks, F.; Krems, J.; Keinath, A. User education in automated driving: Owner’s manual and interactive tutorial support mental model formation and human-automation interaction. Information 2019, 10, 143. [Google Scholar] [CrossRef]
- Metz, B.; Wörle, J.; Hanig, M.; Schmitt, M.; Lutz, A.; Neukum, A. Repeated usage of a motorway automated driving function: Automation level and behavioural adaption. Transp. Res. Part F Traffic Psychol. Behav. 2021, 81, 82–100. [Google Scholar] [CrossRef]
- Saad, F. Some critical issues when studying behavioural adaptations to new driver support systems. Cogn. Technol. Work 2006, 8, 175–181. [Google Scholar] [CrossRef]
- Lin, H.X.; Choong, Y.; Salvendy, G. A proposed index of usability: A method for comparing the relative usability of different software systems. Behav. Inf. Technol. 1997, 16, 267–277. [Google Scholar] [CrossRef]
- Laakkonen, M. Learnability Makes Things Click: A Grounded Theory Approach to the Software Product Evaluation; University of Lapland: Rovaniemi, Finland, 2007. [Google Scholar]
Quality Aspects | Characteristics |
---|---
Automated system aspects | Hardware and software design, transparency, limitations and capabilities, positive pragmaticity (easy-to-learn, easy-to-understand, easy-to-use, efficient, organised, straight-forward, comprehensive, understandable, useful, convenient, etc.), negative pragmaticity (hard-to-learn, difficult-to-understand, hard-to-use, ineffective, time-consuming, incomprehensible, irrelevant, poor quality, etc.), positive hedonicity (innovative, engaging, creative, inviting, fun, attractive, clean, etc.), negative hedonicity (overwhelming, frustrating, unattractive, complex, intimidating, etc.). |
Subjective aspects | User types, age, gender, attitudes, emotional and cognitive states, personality, skills, user experience (quality of domain knowledge, level of experience with computers, level of experience with interface, and experience with similar software), intentions, trust, etc. |
Internal aspects | Health, fatigue, drowsiness, mental fog, anxiety, stress, etc. |
Learning aspects | Users’ knowledge-seeking process, the learning process and the result of the learning process, the speed and quality of the skill development, type and sequence of effects as a result of their learning abilities. |
External aspects | Distractions, the presence of external stress, weather conditions, road type, environmental factors, the difference between structured and unstructured ubiquitous settings, etc. |
Social aspects | Individual and social learning, context of use, content-driven and process-driven approaches, isolated and integrated use, intra-active and interactive, and fixed tools and unfixed devices, etc. |
Scope | Description | Reference
---|---|---
Initial learnability | The initial learning experiences or initial performance of a user interacting with a system. |
 | As a novice user’s initial performance with a system after instruction. | [13]
 | “…allows users to reach a reasonable level of usage proficiency within a short time” | [18]
 | “Allowing users to rapidly begin to work with the system” | [19]
 | “The effort required for a typical user to be able to perform a set of tasks using an interactive system with a predefined level of proficiency” | [27]
 | “The time it takes members of the user community to learn how to use the commands relevant to a set of tasks” | [25]
Extended learnability | Changes in performance over time, after several interactions with the system. |
 | A user’s performance after they have become familiar with the system. | [13]
 | “The ease at which new users can begin effective interaction and achieve maximal performance” | [25]
 | “Initial user performance based on self-instruction [allowing] experienced users to select an alternate model that involved fewer screens or keystrokes” | [9]
 | “Minimally useful with no formal training, and should be possible to master the software” | [28]
 | The “quality of use for users over time” | [21]
Methods/Parameters | Code | Factors | References
---|---|---|---
Research Methods | | |
Empirical approach: strategies based on data collection | E1 | Lab experiment and survey method: users perform tasks with a system and then answer a questionnaire (formative/summative evaluation); completion time is recorded. | [9,13,46]
 | E2 | Lab experiment and think-aloud method: novice users interact with the system and vocalise their thoughts; a tutor sits next to them to answer questions during system use. | [47,48,49]
 | E3 | Naturalistic study and diary method: users interact with the system in their natural habitat and fill in diary entries to keep records of all learning activities. |
 | E4 | Interviews: cover subjective data, and also fill in missing or incomplete study entries. |
 | E5 | Lab experiment and observations: users are trained and then observed while performing test tasks; summative evaluation of task performance. | [50]
 | E7 | Online survey: online questionnaire evaluation with questions related to learnability characteristics, task match and interface understandability. | [51]
Assessment Metrics (adapted from [21]) | | |
Task Metrics: metrics based on task performance | T1 | Percentage of users who complete a task optimally. | [52]
 | T2 | Percentage of users who complete a task without any help. | [52]
 | T3 | Ability to complete tasks optimally after a certain timeframe. | [9]
 | T4 | Decrease in task errors made over certain time intervals. | [22]
 | T5 | Time until the user completes a certain task successfully. | [18]
 | T6 | Time until the user completes a set of tasks within a timeframe. | [18]
 | T7 | Quality of work performed during a task, as scored by judges. | [50]
Command Metrics: metrics based on command usage | C1 | Success rate of commands after being trained. | [20]
 | C2 | Increase in commands used over certain time intervals. | [18]
 | C3 | Increase in complexity of commands over a time interval. | [18]
 | C4 | Percentage of commands known to the user. | [53]
 | C5 | Percentage of commands used by the user. | [53]
Mental Metrics: metrics based on cognitive processes | M1 | Decrease in average think times over certain time intervals. | [18]
 | M2 | Alpha vs. beta waves in EEG patterns during usage. | [14]
 | M3 | Change in chunk size over time. | [27]
 | M4 | Mental model questionnaire pre-test and post-test results. | [54]
Subjective Metrics: metrics based on user feedback | S1 | Number of learnability-related user comments. | [18]
 | S2 | Learnability questionnaire responses. | [42,46]
 | S3 | Twenty-six Likert statements. | [6]
Documentation Metrics: metrics based on documentation usage | D1 | Decrease in help commands used over certain time intervals. | [18]
 | D2 | Time taken to review documentation before starting a task. | [18]
 | D3 | Time to complete a task after reviewing documentation. | [18]
Usability Metrics: metrics based on change in usability | U1 | Comparing “quality of use” over time. | [21]
 | U2 | Comparing “usability” for novice and expert users. | [21]
Rule Metrics: metrics based on specific rules | R1 | Number of rules required to describe the system. | [16,55]
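As an illustration of how two of the task metrics in the table might be computed from logged data, here is a minimal sketch: the log format, user names and numbers are invented assumptions, not from the cited studies.

```python
# Sketch: computing task metrics T1 (percentage of users who complete a task
# optimally) and T4 (decrease in task errors over time intervals) from a
# hypothetical per-user error log. Treating 0 errors as optimal completion.

# {user: [errors in interval 1, interval 2, interval 3]} -- invented data
error_log = {
    "u1": [4, 2, 0],
    "u2": [3, 3, 1],
    "u3": [1, 0, 0],
}

def t1_percent_optimal(log: dict[str, list[int]], interval: int) -> float:
    """T1: percentage of users with an optimal (error-free) completion."""
    optimal = [errs[interval] == 0 for errs in log.values()]
    return 100.0 * sum(optimal) / len(optimal)

def t4_error_decrease(log: dict[str, list[int]]) -> list[float]:
    """T4: mean error count per interval across users (should fall over time)."""
    n_intervals = len(next(iter(log.values())))
    return [sum(errs[i] for errs in log.values()) / len(log)
            for i in range(n_intervals)]

print(t1_percent_optimal(error_log, interval=0))  # early: few users optimal
print(t1_percent_optimal(error_log, interval=2))  # later: more users optimal
print(t4_error_decrease(error_log))               # falling mean error counts
```

A rising T1 and a falling T4 across intervals are exactly the learning effects the extended-learnability measures are meant to capture.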
Share and Cite
Mbelekani, N.Y.; Bengler, K. Learnability in Automated Driving (LiAD): Concepts for Applying Learnability Engineering (CALE) Based on Long-Term Learning Effects. Information 2023, 14, 519. https://doi.org/10.3390/info14100519