Domain Usability Evaluation
Abstract
1. Introduction
1.1. Domain Usability
- Domain content: the interface terms, relations, and processes should match the ones from the domain for which the user interface is designed.
- Consistency: terms used throughout the whole interface should not differ; if they describe the same functionality, the dictionary should be consistent.
- The language used in the interface: the language of the interface should be the language of the user, and the particular localization of the UI should be complete, i.e., there should be no foreign words.
- Domain specificity: the interface should not contain terms that are too general, even if they belong to the target domain. The terms used should be as specific as possible.
- Language barriers and errors: the interface should not create language barriers for the users, and it should not contain language errors.
- Ergonomic aspect: without the proper component placement, design, and ergonomic control, it is not possible to perform tasks effectively.
- Domain aspect: without the proper terminology, it is harder (or not possible at all) to identify the particular features needed to complete the chosen task. This results in the task being prevented entirely or, at the very least, in less effective user performance and lower memorability.
1.2. Problem and Motivation
- (i) There are no clear rules to design the term structure of an application so that it corresponds with the domain.
- (ii)
- (iii) The variety of human thinking, ambiguity, and diversity of natural language represent an issue in evaluating the correctness of UI terminology.
- (iv) No clear manual methods exist for the formal DU evaluation of existing UIs.
- (v) There are no standardized metrics to evaluate domain usability.
- (vi)
1.3. Paper Structure
2. Domain Usability Metrics Design
- the number of domain content issues,
- the count of domain specificity issues,
- the number of consistency issues,
- the count of language errors and barriers,
- the number of world language issues.
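Combining these counts into a single score can be sketched as a weighted error rate normalized by the total number of terms t. The symbols and weights below are our reconstruction rather than the paper's verbatim notation; the percentage form is consistent with the e and DU columns reported for most applications in Section 7 (e.g., Calculator: 1 − 1.7/40 ≈ 96%).

```latex
% A hedged reconstruction: e is a weighted sum of the five issue counts
% above; DU normalizes it by the total number of terms t.
e  = w_{DC}\, i_{DC} + w_{DS}\, i_{DS} + w_{C}\, i_{C}
   + w_{LEB}\, i_{LEB} + w_{WL}\, i_{WL},
\qquad
DU = \left(1 - \frac{e}{t}\right) \cdot 100\%.
```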
3. Automatic Evaluation of Domain Usability Aspects
3.1. Domain Content and Specificity
- Any new terms are marked in the new UI ontology so that they can be checked by a domain expert.
- Renamed terms are marked for the same reason. We identify renamed items based on the representing component and its location in the component hierarchy.
- If the terms (UI components) were moved, then they are checked for consistency of their inclusion into the new group of terms (term hierarchy).
- Removed terms are marked in the old UI ontology because the domain experts, customers, or designers/developers should check whether their removal is reasonable.
- All terms (i.e., their representing components) that have undergone an illogical change are marked as a usability issue.
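The last rule depends on what counts as an illogical change. A minimal sketch, assuming a hand-maintained table of component classes that may reasonably evolve into one another (the entries are illustrative, not the paper's actual rules):

```java
// A hedged sketch: mark a changed term as an issue when its representing
// component's class change is illogical, e.g., a text field becoming a
// button. The compatibility table below is an illustrative assumption.
import java.util.Map;
import java.util.Set;

final class ChangeChecker {
    // component class -> classes it may reasonably evolve into
    private static final Map<String, Set<String>> LOGICAL = Map.of(
            "JTextField", Set.of("JTextArea", "JComboBox"),
            "JCheckBox",  Set.of("JRadioButton", "JComboBox"),
            "JButton",    Set.of("JMenuItem"));

    static boolean isIllogicalChange(String oldClass, String newClass) {
        if (oldClass.equals(newClass)) return false;
        return !LOGICAL.getOrDefault(oldClass, Set.of()).contains(newClass);
    }
}
```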
- a reference ontology modeling the specific domain and its language,
- generic ontological dictionaries or other sources of linguistic relations, such as web search.
Listing 1. Hierarchy of terms for selecting a favorite color in the domain dictionary of the Person form.

favoriteColor {children}: [
    red
    yellow
    blue
    green
]
3.2. Consistency
3.3. Language Barriers and Errors
4. Prerequisites
- (a) General Application Terms Ontology—serves for filtering out non-domain-related terms from the user interface,
- (b) Form Stereotype Recognizer—an algorithm making the analysis of forms more effective.
4.1. DEAL Method
- the UI component that represents the term in the user interface;
- name—the label displayed on the component;
- description—the component’s tooltip if it is present;
- icon (if present);
- category of the component—either functional, informative, textual, grouping (container), or custom;
- type of input data—in the case of input components, the type can be string, number, date, boolean, or enumeration;
- relation to other terms—mutual (non-)exclusivity;
- parent term (usually corresponds to lexical relation of hypernymy or holonymy);
- child terms (usually correspond to hyponyms or meronyms).
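A minimal data-structure sketch of these term properties; the names and types below are our assumptions, not the authors' actual DEAL model:

```java
// A hedged sketch (not the authors' actual DEAL data model) of a term
// record carrying the properties listed above. Names are illustrative.
import java.util.List;

enum Category { FUNCTIONAL, INFORMATIVE, TEXTUAL, GROUPING, CUSTOM }
enum InputType { STRING, NUMBER, DATE, BOOLEAN, ENUMERATION, NONE }

record Term(
        String component,          // class of the representing UI component, e.g. "JButton"
        String name,               // label displayed on the component
        String description,        // tooltip, or null if absent
        String icon,               // icon resource, or null if absent
        Category category,
        InputType inputType,       // NONE for non-input components
        boolean mutuallyExclusive, // relation to sibling terms
        Term parent,               // usually a hypernym/holonym
        List<Term> children) {}    // usually hyponyms/meronyms
```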
Listing 2. The domain model of the Person form.

domain: 'Person' {children}: [
    'Name' {string}
    'Surname' {string}
    'Date of birth' {date}
    'Status' {mutually-exclusive}
        {enumeration}[
            'Single'
            'Married'
            'Divorced'
            'Widowed'
        ]
    'Favorite color' {mutually-not-exclusive}
        {children}: [
            'red'
            'yellow'
            'blue'
            'green'
        ]
    'OK'
    'Close'
    'Reset'
]
4.2. General Application Terms Ontology
4.3. Recognizing Form Stereotypes
- Left—the most common stereotype; text labels are located to the left of the form component.
- Additional Right—similar to Left, but some form components have additional information added to the right of the component, e.g., validation messages.
- Above—labels are located above the form components.
- Additional Below—additional information sometimes appears under the particular form component. Usually, it is a text that opens another application window in which the item is further explained, or a link that sends the user an e-mail with a new password activation in case the old one was forgotten.
- Placeholder—labels are located inside the designated form component. In HTML, this property is called a placeholder. This stereotype is becoming increasingly common in modern web applications, although it is considered less usable. In this case, there is rarely any other label around the form component.
- a descriptional text component (label),
- textual components (input fields, text fields, text areas, password fields, etc.),
- switches (radio buttons, checkboxes),
- spinners,
- tables,
- lists and combo-boxes.
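A minimal sketch of recognizing the stereotypes listed above from the bounding boxes of a label and its form component; the Rect type and the geometric rules are illustrative assumptions, not the authors' actual recognizer:

```java
// A hedged sketch of form-stereotype recognition: classify where a label
// sits relative to its form component using their bounding boxes.
record Rect(int x, int y, int w, int h) {}

enum Stereotype { LEFT, ABOVE, PLACEHOLDER, ADDITIONAL_RIGHT, ADDITIONAL_BELOW, UNKNOWN }

final class StereotypeRecognizer {
    static Stereotype classify(Rect label, Rect field) {
        if (contains(field, label)) return Stereotype.PLACEHOLDER;
        if (label.x() + label.w() <= field.x()) return Stereotype.LEFT;
        if (label.y() + label.h() <= field.y()) return Stereotype.ABOVE;
        if (label.x() >= field.x() + field.w()) return Stereotype.ADDITIONAL_RIGHT;
        if (label.y() >= field.y() + field.h()) return Stereotype.ADDITIONAL_BELOW;
        return Stereotype.UNKNOWN;
    }

    private static boolean contains(Rect outer, Rect inner) {
        return inner.x() >= outer.x() && inner.y() >= outer.y()
            && inner.x() + inner.w() <= outer.x() + outer.w()
            && inner.y() + inner.h() <= outer.y() + outer.h();
    }
}
```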
5. ADUE Method
- Ontological analysis with two ontologies (Section 5.1),
- Specificity evaluation by analyzing the term hierarchies using ontological dictionaries or a web search (Section 5.2),
- Grammar evaluation by searching for grammar errors and typos using an existing linguistic dictionary of the target language (Section 5.3),
- Analysis of form components and their labels based on the form stereotype recognition method (Section 4.3),
- Tooltip analysis (Section 5.4).
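A minimal sketch of chaining these five steps; every type and method name below is a placeholder assumption, not the prototype's actual API:

```java
// A hedged sketch of the ADUE pipeline: the five analysis steps run in
// the order listed above, each contributing its findings to one report.
interface Step { void run(Report report); }

final class AduePipeline {
    private final java.util.List<Step> steps;

    AduePipeline(Step ontologicalAnalysis, Step specificityEvaluation,
                 Step grammarEvaluation, Step formLabelAnalysis,
                 Step tooltipAnalysis) {
        this.steps = java.util.List.of(ontologicalAnalysis, specificityEvaluation,
                grammarEvaluation, formLabelAnalysis, tooltipAnalysis);
    }

    Report evaluate() {
        Report report = new Report();
        for (Step step : steps) step.run(report); // order follows the text
        return report;
    }
}

final class Report { /* collected issues and counts */ }
```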
5.1. Ontological Analysis
- The term’s text representation. A term has such a text representation only if its representing component has a description in the form of a label or a tooltip.
- ID of the representing component. This is mainly because of the ontology format, where every item has to have an identifier. We used the text attribute as an identifier and added numbering to ensure uniqueness.
- The class of the component: a button, label, text field, check box, radio button, etc.
- The term’s parent term.
- Children, i.e., the child terms.
- New elements—we do not consider newly added elements an issue. It is common that as user interfaces evolve over time, they gain more and more features. However, the ADUE user should know about these changes to be able to check their correctness.
- Removed elements—these may or may not introduce an issue and feature depletion; it depends on the evaluator whether the removal was justified.
- Changed elements—we distinguish correctly and incorrectly changed elements; incorrect changes are considered a usability issue. Incorrect changes include the illogical component-type changes described in Section 3.1.
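A minimal sketch of this comparison, reusing the hypothetical Term record from the Section 4.1 sketch and reducing the "incorrect change" test to a component-class change for illustration:

```java
// A hedged sketch of the ontological comparison step: diff two versions
// of the UI term set, keyed by component ID, reporting new, removed,
// and (possibly incorrectly) changed elements.
import java.util.Map;
import java.util.Objects;

final class OntologyDiff {
    static void compare(Map<String, Term> oldTerms, Map<String, Term> newTerms) {
        for (String id : newTerms.keySet())
            if (!oldTerms.containsKey(id))
                System.out.println("NEW: " + id + " (check correctness)");
        for (String id : oldTerms.keySet())
            if (!newTerms.containsKey(id))
                System.out.println("REMOVED: " + id + " (check if justified)");
        for (String id : newTerms.keySet()) {
            Term before = oldTerms.get(id);
            Term after = newTerms.get(id);
            if (before != null && !Objects.equals(before.component(), after.component()))
                System.out.println("CHANGED: " + id + " " + before.component()
                        + " -> " + after.component() + " (possible usability issue)");
        }
    }
}
```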
5.2. Specificity Evaluation
- If any word from the input term set is a number, Google search is used first because it is optimal for numeric values.
- In other cases, WordNet is used first since it is effective and available without restrictions.
- If the probability of result correctness using WordNet is lower than 80%, Urban Dictionary is tried as the next search engine.
- Because its automated use is restricted, Google search is used as the last option, in case the probability of result correctness using Urban Dictionary is lower than 80%.
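A minimal sketch of this fallback chain, with SearchEngine as a hypothetical interface and the 80% threshold taken from the text:

```java
// A hedged sketch of the dictionary fallback chain described above.
// SearchEngine and the Hypernym result are hypothetical interfaces.
import java.util.Set;

record Hypernym(String term, double confidence) {}

interface SearchEngine { Hypernym lookup(Set<String> words); }

final class SpecificityChecker {
    static final double THRESHOLD = 0.8;

    static Hypernym findParent(Set<String> words,
                               SearchEngine wordNet,
                               SearchEngine urbanDictionary,
                               SearchEngine google) {
        // Numeric values are handled best by web search.
        if (words.stream().anyMatch(w -> w.matches("\\d+")))
            return google.lookup(words);
        Hypernym result = wordNet.lookup(words);   // unrestricted, tried first
        if (result.confidence() >= THRESHOLD) return result;
        result = urbanDictionary.lookup(words);    // second option
        if (result.confidence() >= THRESHOLD) return result;
        return google.lookup(words);               // last resort (rate-limited)
    }
}
```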
5.2.1. WordNet
5.2.2. Urban Dictionary
5.2.3. Google Web Search
- "{words separated by commas} are common values for"
- "{words separated by commas} are"
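A minimal sketch of instantiating these query templates for a child-term set; the class and method names are our own:

```java
// A hedged sketch: build the web-search phrases listed above for a set
// of child terms, e.g. ["red", "yellow", "blue", "green"].
import java.util.List;

final class QueryBuilder {
    static List<String> queries(List<String> words) {
        String joined = String.join(", ", words);
        return List.of(
                joined + " are common values for",
                joined + " are");
    }
}
```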
5.3. Grammar Evaluation
5.4. Tooltip Analysis
- recommendation to add a tooltip—if the component has at least one user-readable textual description (e.g., label),
- usability issue—if either the component is general-purpose and has only an icon, or it is a domain-specific one with only an icon or only a textual label, this is considered a domain usability issue and is displayed to the evaluator.
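A minimal sketch of these two rules as a decision function; the boolean component descriptors are hypothetical inputs:

```java
// A hedged sketch of the tooltip-analysis rules above: issues are checked
// first, then the tooltip recommendation for labeled components.
enum TooltipFinding { OK, RECOMMEND_TOOLTIP, USABILITY_ISSUE }

final class TooltipAnalyzer {
    static TooltipFinding analyze(boolean domainSpecific, boolean hasLabel,
                                  boolean hasIcon, boolean hasTooltip) {
        if (hasTooltip) return TooltipFinding.OK;
        boolean iconOnly = hasIcon && !hasLabel;
        if (!domainSpecific && iconOnly) return TooltipFinding.USABILITY_ISSUE;
        if (domainSpecific && (iconOnly || (hasLabel && !hasIcon)))
            return TooltipFinding.USABILITY_ISSUE;
        if (hasLabel) return TooltipFinding.RECOMMEND_TOOLTIP;
        return TooltipFinding.OK;
    }
}
```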
6. Prototype
- the number of missing tooltips and incorrectly changed or deleted components is counted as domain content issues;
- the number of incorrectly defined parents is counted as domain specificity issues;
- the number of grammar errors is counted as language errors and barriers.
ADUE for Java Applications
7. Evaluation
7.1. Method
7.2. Results
7.2.1. Tooltip Analysis
7.2.2. Grammar and Specificity Evaluation
7.2.3. Ontological Comparison
7.2.4. Overall Domain Usability
7.2.5. Execution Time
7.3. Examples of Issues
7.4. Threats to Validity
7.5. Evaluation Conclusion
8. Potential of Existing Methods for DU Evaluation
8.1. Universal Techniques
8.2. User Testing Techniques
- Before the testing begins, the user is instructed to focus on domain terminology issues when performing the test.
- In the types of testing where the usability expert is present during the test, questions about term understandability are asked by the usability expert during each task of the scenario.
- The subject user is prompted to express proposals for new terminology for any item in the system and to explain why (s)he thinks the new terminology is more appropriate for the particular item (the current one being incorrect, inapposite, not reflecting the given concept, etc.). Proposals from all users are recorded and evaluated for the most common ones that should serve as future replacements in the UI.
- If alternative translations of the UI are being tested, the testing should take place with the users naturally speaking the language of the translation. The users are prompted to propose a different translation for any item in the system and explain why they think the new translation is more appropriate for the particular item (incorrect or erroneous translation, more suitable term). Proposals from all users are recorded and evaluated for the most common ones. They can also be evaluated in a second phase where participants see the replaced terminology directly in the UI and check for correctness.
- In A/B testing, multiple versions of UIs with different terminology alternatives are created and tested by the users.
8.3. Inspection Methods
8.3.1. Specializations of General Methods
- Guideline review, cognitive walkthrough, heuristic evaluation techniques, formal usability inspection, and standards inspection: an expert performs the check focusing on domain terminology, consistency, and errors.
- To achieve the best results with the aforementioned techniques, the expert needs to be a domain expert.
- Another option is a pluralistic walkthrough technique, where one evaluator is an expert on usability and UX and the other is a domain expert. They both cooperate to imagine how the user would work with the design and try to find potential DU issues.
- Consistency inspection: the expert performs consistency checks across multiple systems and within the same system. The focus should be on the terminology, including:
  - different terms naming the same functionality or concepts (e.g., OK in one place, Confirm in another);
  - the same terms naming different functionality or concepts;
  - consistency of uppercase and lowercase letters (e.g., File, file, FILE; see the sketch after this list);
  - consistency of term hierarchies, properties, and relations.
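The letter-case check is mechanical enough for a sketch; the helper below (our illustration, not a tool from the paper) groups labels that differ only in case:

```java
// A hedged sketch of one consistency check from the list above: finding
// labels that differ only in letter case across the UI.
import java.util.*;

final class CaseConsistency {
    static Map<String, List<String>> findCaseVariants(Collection<String> labels) {
        Map<String, List<String>> byFolded = new HashMap<>();
        for (String label : labels)
            byFolded.computeIfAbsent(label.toLowerCase(Locale.ROOT),
                                     k -> new ArrayList<>()).add(label);
        // keep only groups with at least two distinct spellings
        byFolded.values().removeIf(v -> new HashSet<>(v).size() < 2);
        return byFolded; // e.g. {"file": ["File", "file", "FILE"]}
    }
}
```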
8.3.2. Cognitive Walkthrough
- What is the user thinking at the beginning of the action? (Q1: Will the user try to achieve the right effect?)
- Is the user able to locate the command? (Q2: Will the user notice that the correct action is available? Is the action appropriately and consistently described by a domain-related term and/or understandable to the user?)
- Is the user able to identify the command? (Q3: Will the user associate the correct action with the effect that (s)he is trying to achieve? Is there any other action with a similar label and/or graphics, which would lead the user astray?)
- Is the user able to interpret the feedback? (Q4: If the correct action is performed, will the user see that progress is being made toward the solution of the task? Is the feedback reported to the user expressed in consistent terms and/or graphics understandable to the user?).
8.4. Inquiry
8.4.1. In-System User Feedback
- For web UIs, it is possible to create a system or a browser plug-in enabling the user to mark any inappropriate terminology in the UI and/or change the label or tooltip of the particular element in the UI. Every change is logged and sent to a central server where the evaluator can review the logs recorded from multiple users. The priority of a change is calculated automatically from the number of users proposing a particular terminology change. The proposed terms can be assessed as a percentage according to the number of users proposing the same term (see the aggregation sketch after this list).
- For any UI, a separate form can be made where the user selects one of the pre-prepared lists of application features (labeled and with icons for better recognizability) and sends comments on how and why to change the description of the particular feature. However, it is best to comment directly in the target UI because of the context.
- For both possibilities, the users can assess the appropriateness of a particular term using the approach by Isohella and Nissilä [8].
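A minimal sketch of the percentage assessment mentioned in the first item, assuming proposals are collected per UI element; the names are illustrative:

```java
// A hedged sketch of aggregating in-system terminology proposals: the
// share of users proposing each replacement term for one UI element.
import java.util.*;

final class ProposalAggregator {
    static Map<String, Double> shares(List<String> proposals) {
        Map<String, Long> counts = new HashMap<>();
        for (String p : proposals) counts.merge(p, 1L, Long::sum);
        Map<String, Double> result = new HashMap<>();
        for (var e : counts.entrySet())
            result.put(e.getKey(), 100.0 * e.getValue() / proposals.size());
        return result; // e.g. {"Confirm": 60.0, "OK": 40.0}
    }
}
```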
8.4.2. Surveys and Questionnaires
8.5. Analytical Modeling Techniques
8.6. Simulation Techniques
8.7. Automated Evaluation Methods
9. Related Work
9.1. Domain Content
- Textual content of UIs—Jacob Nielsen refers to DU aspects only in very general terms, stressing the importance of "the system’s addressing the user’s knowledge of the domain" [1].
- Domain dictionary, Ontology—the importance of a domain dictionary of UIs is also stressed by Artemieva [60], Kleshchev [61], and Gribova [62], the last of whom also presented a method of estimating the usability of a UI based on its model. Her model is component-oriented rather than focused specifically on the domain, and she focuses primarily on general usability problems such as having too many items in a menu.
- Domain structure—studied by Billman et al. [63]. Their experiment with NASA users showed a big difference in user performance between the old application and the new one, as the new application had a better domain-specific terminology structure.
- User interface semantics, Ambiguity—Tilly and Porkoláb [64] propose using semantic UIs (SUI) to solve the problem of the ambiguity of UI terminology. The core of SUI is a general ontology that is a basis for creating all UIs in the specific domain. User interfaces can have a different appearance and arrangement but the domain dictionary must remain the same. Ontologies in general also deal with the semantics of UIs.
- Complexity, reading complexity—Becker [65], Kincaid et al. [55], and Mahajan and Shneiderman [66] stress that the complexity of the textual content should not be too high because that would make the application less usable. Kincaid et al. refer to the reading complexity indices (ARI, Kincaid). Complexity is closely related to the domain content DU aspect: the UI should have the reading complexity appropriate for the target users.
- Matching with the real world or correspondence to the domain—Many of the above-listed authors, along with Badashian et al. [12], also stress the importance of applications corresponding to the real world and address the user’s domain knowledge. In fact, this is a more general description of our domain content DU aspect. Hilbert and Redmiles [47] stress the correspondence of event sequences with the real world as well as the domain dictionary.
- Knowledge aspect of UI design—One of the attributes of Eason’s usability definition [67] refers to the knowledge aspect of UI design representing the knowledge that the user applies to the task, and it may be appropriate or inappropriate. In general, the task match attribute of Eason’s definition also refers to processes mapping but does not explicitly target the mapping of specific domain tasks.
- Appropriateness recognizability—defined by ISO/IEC-25010 [68] as an aspect referring to the user understanding whether the software is appropriate for their needs and how it can be used for particular tasks and conditions of use. The term was redefined in 2011 from Understandability. However, again, appropriateness recognizability does not specifically refer to the target domain match.
9.2. Consistency
9.3. World Language, Language Barriers, Errors
9.4. All Domain Usability Aspects
10. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Nielsen, J. Usability Engineering; Morgan Kaufmann Publishers, Inc.: San Francisco, CA, USA, 1993. [Google Scholar]
- Norman, D. The Design of Everyday Things: Revised and Expanded Edition; Basic Books: New York, NY, USA, 2013. [Google Scholar]
- Morville, P. User Experience Design. 2004. Available online: http://semanticstudios.com/user_experience_design (accessed on 9 August 2021).
- Lewis, J.R. Usability: Lessons Learned … and Yet to Be Learned. Int. J. Hum. Comput. Interact. 2014, 30, 663–684. [Google Scholar] [CrossRef]
- ISO-9241-11. Ergonomics of Human-System Interaction—Part 11: Usability: Definitions and Concepts; ISO: Geneva, Switzerland, 2018. [Google Scholar]
- Chilana, P.K.; Wobbrock, J.O.; Ko, A.J. Understanding Usability Practices in Complex Domains. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10), Atlanta, GA, USA, 10–15 April 2010; ACM: New York, NY, USA, 2010; pp. 2337–2346. [Google Scholar] [CrossRef]
- Gulliksen, J. Designing for Usability—Domain Specific Human-Computer Interfaces in Working Life; Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science & Technology; Acta Universitatis Upsaliensis: Uppsala, Sweden, 1996; p. 28. [Google Scholar]
- Isohella, S.; Nissila, N. Connecting usability with terminology: Achieving usability by using appropriate terms. In Proceedings of the 2015 IEEE International Professional Communication Conference (IPCC ’15), Limerick, Ireland, 12–15 July 2015; pp. 1–5. [Google Scholar] [CrossRef]
- Lanthaler, M.; Gütl, C. Model Your Application Domain, Not Your JSON Structures. In Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, 13–17 May 2013; ACM: New York, NY, USA, 2013; pp. 1415–1420. [Google Scholar] [CrossRef]
- Bačíková, M.; Galko, L. The design of manual domain usability evaluation techniques. Open Comput. Sci. 2018, 8, 51–67. [Google Scholar] [CrossRef] [Green Version]
- Bačíková, M.; Porubän, J. Domain Usability, User’s Perception. In Human-Computer Systems Interaction: Backgrounds and Applications 3; Springer International Publishing: Cham, Switzerland, 2014; pp. 15–26. [Google Scholar] [CrossRef]
- Badashian, A.S.; Mahdavi, M.; Pourshirmohammadi, A.; Nejad, M.M. Fundamental Usability Guidelines for User Interface Design. In Proceedings of the 2008 International Conference on Computational Sciences and Its Applications (ICCSA ’08), Perugia, Italy, 30 June–3 July 2008; IEEE Computer Society: Washington, DC, USA, 2008; pp. 106–113. [Google Scholar] [CrossRef]
- W3C. Web Content Accessibility Guidelines (WCAG) 2.0, Part 3 about Understandability. 2008. Available online: https://www.w3.org/TR/WCAG20/#understandable (accessed on 9 August 2021).
- Kolski, C.; Millot, P. A rule-based approach to the ergonomic “static” evaluation of man-machine graphic interface in industrial processes. Int. J. Man Mach. Stud. 1991, 35, 657–674. [Google Scholar] [CrossRef]
- Sears, A. AIDE: A Step toward Metric-Based Interface Development Tools. In Proceedings of the 8th Annual ACM Symposium on User Interface and Software Technology (UIST ’95), Pittsburgh, PA, USA, 15–17 November 1995; Association for Computing Machinery: New York, NY, USA, 1995; pp. 101–110. [Google Scholar] [CrossRef]
- Ivory, M.Y.; Hearst, M.A. The state of the art in automating usability evaluation of user interfaces. ACM Comput. Surv. 2001, 33, 470–516. [Google Scholar] [CrossRef]
- Tullis, T.S. The Formatting of Alphanumeric Displays: A Review and Analysis. Hum. Factors 1983, 25, 657–682. [Google Scholar] [CrossRef] [PubMed]
- Paz, F.; Pow-Sang, J.A. Current Trends in Usability Evaluation Methods: A Systematic Review. In Proceedings of the 2014 7th International Conference on Advanced Software Engineering and Its Applications, Hainan, China, 20–23 December 2014; pp. 11–15. [Google Scholar] [CrossRef]
- Bakaev, M.; Mamysheva, T.; Gaedke, M. Current trends in automating usability evaluation of websites: Can you manage what you cannot measure? In Proceedings of the 2016 11th International Forum on Strategic Technology (IFOST), Novosibirsk, Russia, 1–3 June 2016; pp. 510–514. [Google Scholar] [CrossRef]
- Namoun, A.; Alrehaili, A.; Tufail, A. A Review of Automated Website Usability Evaluation Tools: Research Issues and Challenges. In Design, User Experience, and Usability: UX Research and Design; Springer: Cham, Switzerland, 2021; pp. 292–311. [Google Scholar] [CrossRef]
- Bačíková, M.; Porubän, J. Ergonomic vs. domain usability of user interfaces. In Proceedings of the 2013 The 6th International Conference on Human System Interaction (HSI), Sopot, Poland, 6–8 June 2013; pp. 159–166. [Google Scholar] [CrossRef]
- Bačíková, M.; Zbuška, M. Towards automated evaluation of domain usability. In Proceedings of the 2015 IEEE 13th International Scientific Conference on Informatics, Poprad, Slovakia, 18–20 November 2015; pp. 41–46. [Google Scholar] [CrossRef]
- Bačíková, M.; Galko, L.; Hvizdová, E. Manual techniques for evaluating domain usability. In Proceedings of the 2017 IEEE 14th International Scientific Conference on Informatics, Poprad, Slovakia, 14–16 November 2017; pp. 24–30. [Google Scholar] [CrossRef]
- Bačíková, M.; Galko, L.; Hvizdová, E. Experimental Design of Metrics for Domain Usability. In Proceedings of the International Conference on Computer-Human Interaction Research and Applications (CHIRA 2017), Funchal, Portugal, 31 October 2017; Volume 1, pp. 118–125. [Google Scholar] [CrossRef]
- Galko, L.; Bačíková, M. Experiments with automated evaluation of domain usability. In Proceedings of the 2016 9th International Conference on Human System Interactions (HSI), Portsmouth, UK, 6–8 July 2016; pp. 252–258. [Google Scholar] [CrossRef]
- Nemoto, T.; Beglar, D. Developing Likert-Scale Questionnaires. In JALT Conference Proceedings; Sonda, N., Krause, A., Eds.; JALT: Tokyo, Japan, 2014; pp. 1–8. [Google Scholar]
- Varanda Pereira, M.J.; Fonseca, J.; Henriques, P.R. Ontological approach for DSL development. Comput. Lang. Syst. Struct. 2016, 45, 35–52. [Google Scholar] [CrossRef] [Green Version]
- Bačíková, M. Domain Analysis of Graphical User Interfaces of Software Systems (extended dissertation abstract). In Information Sciences and Technologies; Bulletin of the ACM Slovakia; STU Press: Bratislava, Slovakia, 2014; Volume 6, pp. 17–23. [Google Scholar]
- Bačíková, M.; Porubän, J.; Lakatoš, D. Defining Domain Language of Graphical User Interfaces. In Proceedings of the Symposium on Languages Applications and Technologies (SLATE), Porto, Portugal, 20–21 June 2013; pp. 187–202. [Google Scholar] [CrossRef]
- Vrandečić, D.; Krötzsch, M. Wikidata: A Free Collaborative Knowledgebase. Commun. ACM 2014, 57, 78–85. [Google Scholar] [CrossRef]
- Huynh, D.F.; Li, G.; Ding, C.; Huang, Y.; Chai, Y.; Hu, L.; Chen, J. Generating Insightful Connections between Graph Entities. U.S. Patent 20140280044, 14 July 2020. [Google Scholar]
- Lewis, C. Using the “Thinking-Aloud” Method in Cognitive Interface Design; Technical Report; IBM, T. J. Watson Research Center: New York, NY, USA, 1982. [Google Scholar]
- Kato, T. What “question-asking protocols” can say about the user interface. Int. J. Man Mach. Stud. 1986, 25, 659–673. [Google Scholar] [CrossRef]
- Lund, A.M. Expert Ratings of Usability Maxims. Ergon. Des. Q. Hum. Factors Appl. 1997, 5, 15–20. [Google Scholar] [CrossRef]
- Marciniak, J. Encyclopedia of Software Engineering, 2nd ed.; Wiley: Chichester, UK, 2002. [Google Scholar]
- Nielsen, J. The Use and Misuse of Focus Groups. 1997. Available online: http://www.nngroup.com/articles/focus-groups/ (accessed on 9 August 2021).
- Stull, E. User Testing. In UX Fundamentals for Non-UX Professionals; Apress: New York, NY, USA, 2018; pp. 311–317. [Google Scholar] [CrossRef]
- Boehm, B.W.; Brown, J.R.; Lipow, M. Quantitative Evaluation of Software Quality. In Proceedings of the 2nd International Conference on Software Engineering (ICSE ’76), San Francisco, CA, USA, 13–15 October 1976; IEEE Computer Society Press: Washington, DC, USA, 1976; pp. 592–605. [Google Scholar]
- Nielsen, J.; Mack, R.L. (Eds.) Usability Inspection Methods; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1994. [Google Scholar]
- Wharton, C.; Rieman, J.; Lewis, C.; Polson, P. The Cognitive Walkthrough Method: A Practitioner’s Guide. In Usability Inspection Methods; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1994; pp. 105–140. [Google Scholar]
- Mahatody, T.; Sagar, M.; Kolski, C. State of the Art on the Cognitive Walkthrough Method, Its Variants and Evolutions. Int. J. Hum.-Comput. Interact. 2010, 26, 741–785. [Google Scholar] [CrossRef]
- González, M.P.; Lorés, J.; Granollers, A. Assessing Usability Problems in Latin-American Academic Webpages with Cognitive Walkthroughs and Datamining Techniques. In Usability and Internationalization. HCI and Culture; Aykin, N., Ed.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 306–316. [Google Scholar] [CrossRef]
- Krueger, R.A.; Casey, M.A. Focus Groups: A Practical Guide for Applied Research, 5th ed.; SAGE Publications Inc.: Thousand Oaks, CA, USA, 2015. [Google Scholar]
- Flanagan, J.C. The critical incident technique. Psychol. Bull. 1954, 51, 327–358. [Google Scholar] [CrossRef] [Green Version]
- Harper, B.D.; Norman, K.L. Improving user satisfaction: The questionnaire for user interaction satisfaction version 5.5. In Proceedings of the 1st Annual Mid-Atlantic Human Factors Conference, Virginia Beach, VA, USA, 25–26 February 1993; pp. 224–228. [Google Scholar]
- Tullis, T.S.; Stetson, J.N. A comparison of questionnaires for assessing website usability. In Proceedings of the Usability Professional Association Conference, Minneapolis, MN, USA, 7–11 June 2004; pp. 1–12. [Google Scholar]
- Hilbert, D.M.; Redmiles, D.F. Extracting usability information from user interface events. ACM Comput. Surv. 2000, 32, 384–421. [Google Scholar] [CrossRef]
- Assila, A.; de Oliveira, K.M.; Ezzedine, H. Standardized Usability Questionnaires: Features and Quality Focus. Electron. J. Comput. Sci. Inf. Technol. 2016, 6, 15–31. [Google Scholar]
- Bangor, A.; Kortum, P.; Miller, J. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Stud. 2009, 4, 114–123. [Google Scholar]
- McLellan, S.; Muddimer, A.; Peres, S.C. The Effect of Experience on System Usability Scale Ratings. J. Usability Stud. 2012, 7, 56–67. [Google Scholar]
- Brooke, J. SUS: A Retrospective. J. Usability Stud. 2013, 8, 29–40. [Google Scholar]
- John, B.E.; Kieras, D.E. The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast. ACM Trans. Comput. Hum. Interact. 1996, 3, 320–351. [Google Scholar] [CrossRef]
- Kieras, D. Chapter 31—A Guide to GOMS Model Usability Evaluation using NGOMSL. In Handbook of Human-Computer Interaction, 2nd ed.; North-Holland: Amsterdam, The Netherlands, 1997; pp. 733–766. [Google Scholar] [CrossRef]
- Clark, R.E.; Feldon, D.F.; van Merriënboer, J.J.G.; Kenneth, A.Y.; Early, S. Cognitive Task Analysis. In Handbook of Research on Educational Communications and Technology; Routledge: London, UK, 2007; Chapter 43. [Google Scholar] [CrossRef]
- Kincaid, J.P.; Fishburne, R.P.; Rogers, R.L.; Chissom, B.S. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel; Technical Report; University of Central Florida: Orlando, FL, USA, 1975. [Google Scholar]
- Kincaid, J.P.; McDaniel, W.C. An Inexpensive Automated Way of Calculating Flesch Reading Ease Scores; Patent Disclosure Document 031350; U.S. Patent Office: Washington, DC, USA, 1974.
- Young, R.M.; Green, T.R.G.; Simon, T. Programmable User Models for Predictive Evaluation of Interface Designs. SIGCHI Bull. 1989, 20, 15–19. [Google Scholar] [CrossRef]
- Porubän, J.; Bačíková, M. Definition of Computer Languages via User Interfaces; Technical University of Košice: Košice, Slovakia, 2010; pp. 53–57. [Google Scholar]
- Mahajan, R.; Shneiderman, B. Visual and Textual Consistency Checking Tools for Graphical User Interfaces. IEEE Trans. Softw. Eng. 1997, 23, 722–735. [Google Scholar] [CrossRef]
- Artemieva, I.L. Ontology development for domains with complicated structures. In Proceedings of the First International Conference on Knowledge Processing and Data Analysis (KONT’07/KPP’07), Novosibirsk, Russia, 14–16 September 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 184–202. [Google Scholar] [CrossRef]
- Kleshchev, A.S. How can ontologies contribute to software development? In Proceedings of the First International Conference on Knowledge Processing and Data Analysis (KONT’07/KPP’07), Novosibirsk, Russia, 14–16 September 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 121–135. [Google Scholar] [CrossRef]
- Gribova, V. A Method of Estimating Usability of a User Interface Based on its Model. Int. J. Inf. Theor. Appl. 2007, 14, 43–47. [Google Scholar]
- Billman, D.; Arsintescucu, L.; Feary, M.; Lee, J.; Smith, A.; Tiwary, R. Benefits of matching domain structure for planning software: The right stuff. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (CHI ’11), Vancouver, BC, Canada, 7–12 May 2011; ACM: New York, NY, USA, 2011; pp. 2521–2530. [Google Scholar] [CrossRef]
- Tilly, K.; Porkoláb, Z. Automatic classification of semantic user interface services. In Proceedings of the Ontology-Driven Software Engineering (ODiSE’10), Reno, NV, USA, 17–21 October 2010; ACM: New York, NY, USA, 2010; pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
- Becker, S.A. A study of web usability for older adults seeking online health resources. ACM Trans. Comput. Hum. Interact. 2004, 11, 387–406. [Google Scholar] [CrossRef]
- Shneiderman, B. Response time and display rate in human performance with computers. ACM Comput. Surv. 1984, 16, 265–285. [Google Scholar] [CrossRef]
- Eason, K.D. Towards the experimental study of usability. Behav. Inform. Technol. 1984, 3, 133–143. [Google Scholar] [CrossRef]
- ISO/IEC-25010. Systems and Software Engineering—Systems and Software, Quality Requirements and Evaluation (SQuaRE)—System and Software Quality Models; ISO: Geneva, Switzerland, 2011. [Google Scholar]
- Shackel, B. Usability—Context, framework, definition, design and evaluation. Hum. Factors Inform. Usability 1991, 21, 21–38. [Google Scholar] [CrossRef]
- Madan, A.; Kumar, S. Usability evaluation methods: A literature review. Int. J. Eng. Sci. Technol. 2012, 4, 590–599. [Google Scholar]
- Kordić, S.; Ristić, S.; Čeliković, M.; Dimitrieski, V.; Luković, I. Reverse Engineering of a Generic Relational Database Schema Into a Domain-Specific Data Model. In Proceedings of the Central European Conference on Information and Intelligent Systems, Varaždin, Croatia, 27–29 September 2017; pp. 19–28. [Google Scholar]
| Rating | Interpretation |
|---|---|
|  | Excellent |
|  | Very good |
|  | Good |
|  | Satisfactory |
| less than 55% | Insufficient |
| Term | Occurrence | Most Common UI Element |
|---|---|---|
| About/Credits | 90% | Menu item |
| Apply | 87% | Button |
| Cancel | 97% | Button |
| Close/Exit/Quit | 100% | Button/Menu item |
| Copy | 70% | Menu item |
| Cut | 70% | Menu item |
| Edit | 70% | Menu |
| File | 90% | Menu |
| Help | 80% | Menu/Menu item |
| New | 90% | Menu item |
| OK | 97% | Button |
| Open | 83% | Button/Menu item |
| Paste | 70% | Menu item |
| Plug-ins/Extensions | 40% | Menu/Menu item |
| Preferences/Settings | 60% | Menu/Menu item |
| Redo | 83% | Menu item |
| Save | 83% | Button/Menu item |
| Save as | 83% | Menu item |
| Tools | 53% | Menu |
| Undo | 83% | Menu item |
| View | 63% | Menu |
| Window | 70% | Menu |
| Application | Terms | Tooltip Errors | Tooltip Warnings | Grammar Errors | Incorrect Parents | e | DU (%) | Execution Time |
|---|---|---|---|---|---|---|---|---|
| Calculator | 40 | 0 | 0 | 1 | 0 | 1.7 | 96 | 0 s |
| Sweet Home 3D | 200 | 13 | 11 | 4 | 17 | 84.4 | 58 | 2 m 0 s |
| FreeMind 2014 | 273 | 1 | 94 | 14 | 17 | 68.3 | 75 | 1 m 50 s |
| FreePlane 2015 * | 873 | 13 | 323 | 128 | 33 | 833.5 | 5 | 5 m 6 s |
| Finanx | 74 | 39 | 9 | 4 | 8 | 140.7 | 90 | 36 s |
| JarsBrowser | 19 | 0 | 8 | 2 | 5 | 16.4 | 14 | 8 s |
| BaseFormApplication | 74 | 0 | 8 | 11 | 8 | 42.1 | 43 | 42 s |
| JavaNotePad | 19 | 0 | 17 | 0 | 5 | 13.0 | 32 | 32 s |
| TimeSlotTracker | 62 | 6 | 36 | 7 | 10 | 55.0 | 11 | 55 s |
| Gait Monitoring+ | 70 | 0 | 17 | 0 | 7 | 18.2 | 74 | 29 s |
| Activity Prediction Tool | 98 | 1 | 84 | 2 | 11 | 33.2 | 66 | 1 m 19 s |
| VOpR | 96 | 0 | 21 | 23 | 8 | 59.9 | 38 | 44 s |
| GDL Editor 0.9 | 73 | 4 | 8 | 4 | 11 | 45.3 | 38 | 58 s |
| GDL Editor 0.95 * | 75 | 4 | 8 | 4 | 11 | 61.5 | 18 | 15 s |
| Application | Original Application | New Terms | Deleted Terms | Changed Terms | Incorrectly Changed Terms |
|---|---|---|---|---|---|
| FreePlane 2015 | FreeMind 2014 | 748 | 168 | 93 | 0 |
| GDL Editor 0.95 | GDL Editor 0.9 | 7 | 5 | 4 | 0 |