Review
Peer-Review Record

Task Automation Intelligent Agents: A Review

Future Internet 2023, 15(6), 196; https://doi.org/10.3390/fi15060196
by Abdul Wali, Saipunidzam Mahamad * and Suziah Sulaiman *
Submission received: 3 April 2023 / Revised: 17 May 2023 / Accepted: 22 May 2023 / Published: 29 May 2023

Round 1

Reviewer 1 Report

The article proposes a literature review on intelligent task automation agents. The presented meta-analysis follows the PRISMA guidelines, outlining a transparent literature selection process with clear inclusion and exclusion criteria. However, instead of focusing on one core analysis stream, the article presents two different research questions, one focusing on the state-of-the-art in task automation intelligent agents (i.e., RQ1) and the second on usability heuristics specifically for intelligent agents (RQ2). At first sight, this seems to be a valid approach, as both RQs propose to investigate intelligent agents (IA) for task automation. Yet, unfortunately, the presented results do not exhibit this overlap at all, as the RQ2 part seems to heavily neglect IA and task automation aspects. In fact, of the 3.5 pages of results dedicated to RQ2 (approx. the same amount as dedicated to RQ1), those related to IA take up less than half a page and cover only 4 reviewed articles. The remaining results on RQ2 describe all sorts of general studies focusing on heuristic analysis, all of which seem to be completely unrelated to the topic at hand. In other words, the article essentially presents two literature reviews with very little overlap. I wonder why it was decided to even include this second RQ if it does not really add much to the analysis focus, i.e., the usability of task-automating agents. Thus, I would recommend either completely abandoning RQ2 and elaborating more deeply on RQ1, or, in case RQ2 really needs to be kept, focusing only on studies that actually evaluate the usability of task-automating agents. Then, I guess, the article could provide a valuable contribution to the field. In its current form, however, it misses this clear investigation focus.

Additional note: The conclusion section states that “this study aims to develop usability heuristics for task automating intelligent agents”, yet the article merely reviews existing work in this field.

Additional formal note: Please use Author names in front of citation numbers (e.g., Assistive Macros by [46] => Assistive Macros by Rodrigues [46]) as the current form significantly hinders legibility, particularly with these kinds of review articles! Also, if you report on different sub-categories, please report on a category named “other” at the end (cf. Line 430)!

Line 58: converse with children => conversing with children

Line 79: Similarly a study => similarly a study by

Line 104: by the research => by the research of

Line 168: and UI => and the UI

Line 199/200: therefore had => therefore it had

Line 289: 3.1.2 => 3.1.3

Line 319: it uses android accessibility => it uses the android accessibility

Line 574: present user usability => present usability

Line 575: If a user needs to understand => Only if a user understands

Author Response

Response to Reviewer 1 Comments

 

Point 1: The article proposes a literature review on intelligent task automation agents. The presented meta-analysis follows the PRISMA guidelines, outlining a transparent literature selection process with clear inclusion and exclusion criteria. However, instead of focusing on one core analysis stream, the article presents two different research questions, one focusing on the state-of-the-art in task automation intelligent agents (i.e., RQ1) and the second on usability heuristics specifically for intelligent agents (RQ2). At first sight, this seems to be a valid approach, as both RQs propose to investigate intelligent agents (IA) for task automation. Yet, unfortunately, the presented results do not exhibit this overlap at all, as the RQ2 part seems to heavily neglect IA and task automation aspects. In fact, of the 3.5 pages of results dedicated to RQ2 (approx. the same amount as dedicated to RQ1), those related to IA take up less than half a page and cover only 4 reviewed articles. The remaining results on RQ2 describe all sorts of general studies focusing on heuristic analysis, all of which seem to be completely unrelated to the topic at hand. In other words, the article essentially presents two literature reviews with very little overlap. I wonder why it was decided to even include this second RQ if it does not really add much to the analysis focus, i.e., the usability of task-automating agents. Thus, I would recommend either completely abandoning RQ2 and elaborating more deeply on RQ1, or, in case RQ2 really needs to be kept, focusing only on studies that actually evaluate the usability of task-automating agents. Then, I guess, the article could provide a valuable contribution to the field. In its current form, however, it misses this clear investigation focus.

 

Response 1: Respectfully, the reviewer's comments have helped the authors greatly in polishing the manuscript through a critical analysis of it. These comments point out a major challenge of the research, which has been discussed by the authors. However, the main focus of this study is on the heuristic evaluation guidelines, or usability heuristics, of task automation intelligent agents (RQ2). The first research question shows the significance of RQ2 and why it was undertaken, by outlining the development done in this field. RQ1 demonstrates that programming-by-demonstration, an end-user development approach, is becoming a significant method of making usable applications and systems; for that reason, Appendix A has been included. Moreover, these applications and systems do not follow any specified guidelines or heuristics for enhancing usability; therefore, there are usability issues in the existing applications and systems, and they are not widely adopted by users. To investigate the reasons behind these usability issues, RQ2 was formulated. RQ2 shows that, over the last half-decade, the scientific work accomplished in analysing the usability of these task automation intelligent agents is far below the expected level. By reviewing these two research questions together, a significant research gap can easily be noticed: exponential development is being done in making task automation intelligent agents more intelligent; however, no specific guidelines or heuristics are available or established for designers and developers to evaluate usability and create user-friendly intelligent agents. The existing work focuses significantly on the usability of intelligent user interfaces (IUIs); this field encompasses a wide range of areas and technologies, e.g., artificial intelligence systems, intelligent personal assistants, virtual agents, recommender systems, conversational agents, and many other technologies exhibiting intelligent behaviour. Although IUIs are getting significant attention, relatively little work has been done to evaluate task automation intelligent agents.

After detailed discussion among the authors, it was decided that both research questions contribute to the study in their own way and should therefore be included. RQ2 reviews all the studies on developing usability heuristics in the recent half-decade, and the 4 reviewed articles show the proportion of usability work done in the context of intelligent agents (IAs).

 

Point 2: The conclusion section states that “this study aims to develop usability heuristics for task automating intelligent agents”, yet the article merely reviews existing work in this field.

 

Response 2: The reviewer's comment on the conclusion section prompted the authors to discuss it further and rewrite it. Also, the articles included in this study show the number of articles available in this field. Therefore, the authors aim to develop usability heuristics for task automation intelligent agents by reviewing the existing literature on the usability of intelligent user interfaces and intelligent agents.

 

Point 3: Please use Author names in front of citation numbers (e.g., Assistive Macros by [46] => Assistive Macros by Rodrigues [46]) as the current form significantly hinders legibility, particularly with these kinds of review articles! 

 

Response 3: The manuscript has been updated and the names of the first authors have been included in front of respective citation numbers.

 

Point 4: Also, if you report on different sub-categories, please report on a category named “other” at the end (cf. Line 430)!

 

Response 4: The sub-category heading has been renamed “Other Domains”; it includes the review of articles outside the mentioned categories and has been moved to the end, at line 613.

 

Point 5: Line 58: converse with children => conversing with children

 

Response 5: Line 58 corrected.

 

Point 6: Line 79: Similarly a study => similarly a study by

 

Response 6: Line 79 corrected.

 

Point 7: Line 104: by the research => by the research of

 

Response 7: Line 104 corrected.

 

Point 8: Line 168: and UI => and the UI

 

Response 8: Line 168 corrected.

 

Point 9: Line 199/200: therefore had => therefore it had

 

Response 9: Lines 199/200 corrected.

 

Point 10: Line 289: 3.1.2 => 3.1.3

 

Response 10: Line 289 corrected.

 

Point 11: Line 319: it uses android accessibility => it uses the android accessibility

 

Response 11: Line 319 corrected.

 

Point 12: Line 574: present user usability => present usability

 

Response 12: Line 574 corrected.

 

Point 13: Line 575: If a user needs to understand => Only if a user understands

 

Response 13: Line 575 corrected.

 

 

Author Response File: Author Response.docx

Reviewer 2 Report

In this study, two research questions are evaluated through the available literature to expand the research on intelligent TA agents: (1) What is the state-of-the-art in TA agents? (2) What are the existing methods and techniques for developing usability heuristics, specifically for IAs? Research shows that groundbreaking development has been done in mobile phone task automation recently. Still, it must be done per usability principles to achieve maximum usability and user satisfaction. The second research question further justifies developing a set of domain-specific usability heuristics for mobile TA agents.

 

This review is interesting, and the topic is hot; however, the authors did not provide some simple tables or pictures for the readers to figure out the SOTA results and future directions.

 

For the method, the authors just listed ref. [26]; what does this mean?

 

In Figures 2 and 3, the heading lettering is too large.

 

After the conclusion, what is the big picture the authors want to tell us?

Moderate editing of English language.

Author Response

Response to Reviewer 2 Comments

 

Point 1: This review is interesting, and the topic is hot; however, the authors did not provide some simple tables or pictures for the readers to figure out the SOTA results and future directions.

 

Response 1: I would like to start by thanking the reviewer for their motivating and encouraging remarks and comments on the manuscript. These comments have greatly encouraged the authors regarding the motivation and direction of this work. To clarify the reviewer's comment, as discussed in the introduction section, among the task automation techniques and methods, the programming-by-demonstration approach is considered the state-of-the-art because it allows users to automate their tasks even if they have no professional knowledge or coding experience and only know how to perform the task. To further explain the working of the SOTA approach, a figure showing the working of Sugilite has been included in the manuscript at line 330.

 

 

Point 2: For the method, the authors just listed ref. [26]; what does this mean?

 

Response 2: The introduction of the methodology section has been extended for readers' better understanding: “This study uses the search strategy previously used by Qiu, et al. [26]. This is a systematic methodology which includes conducting a search on databases with search terms and analysing articles based on inclusion and exclusion criteria. After the articles are analysed, a full-text review is conducted on selected articles.”

 

Point 3: In Figures 2 and 3, the heading lettering is too large.

Response 3: The heading lettering has been corrected; the updated font size is 12.

 

Point 4: After the conclusion, what is the big picture the authors want to tell us?

Response 4: Respectfully, the conclusion section has been rewritten and changes have been made for better understanding. In the conclusion section, the authors discuss the need to develop usability heuristics for evaluating task automation intelligent agents.

 

Point 5: Moderate editing of English language.

Response 5: The premium version of Grammarly has been used, and the mistakes have been corrected.

Author Response File: Author Response.docx

Reviewer 3 Report

The article discusses how Machine Learning (ML) and Artificial Intelligence (AI) have enabled the automation of various tasks, known as Task Automation (TA), which users can perform by creating automation rules or through Intelligent Agents (IA) such as virtual personal assistants. The Programming by Demonstration (PbD) technique has shown significant developmental growth due to its user-centered approach, and popular TA agents include Apple Siri, Google Assistant, MS Cortana, and Amazon Alexa. However, usability issues have limited the widespread adoption of TA agents. The article aims to evaluate the state-of-the-art in TA agents and existing methods for developing usability heuristics, specifically for IAs. The research emphasizes developing domain-specific usability heuristics for mobile TA agents to achieve maximum usability and user satisfaction. While the topic is essential and current, the manuscript requires major revision.

1. The authors have created acronyms for Task Automation and Intelligent Agent as TA and IA, respectively. For instance, the authors used the term "TA agents" in research question 1, which is a combination of one acronym and one abbreviation. It is difficult and confusing to read the paper this way. I would appreciate it if the authors could come up with different acronyms or use the full words instead.

2. The first research question is "What is the state-of-the-art in TA agents?" This is a very general question. The authors need to be more specific about what they want to know about TA agents.

3. The authors need to explain the motivation for the research questions, which is missing from the current manuscript.

4. There is no rationale for why the authors are extracting data from papers. It is confusing to see how they ended up explaining the categories of desktop-based TA systems and web-based TA systems. Table 1 shows the heuristic development methods; however, the content listed under heuristic development methods consists of different types of papers, such as literature reviews and usability issues. These are not heuristic development methods.

5. Line number 338 shows that 39 review papers were selected for this review. However, Table 1 shows a total of 40 papers, which is contradictory. Overall, there are major flaws in the literature review methodology, and there is no explanation of how the results were derived. The conclusions are not inferred from the results.

6. The authors did not mention the significance of this study, and there are no limitations or significance of the study mentioned. A major revision is required, and several sections need to be revised.

 

The manuscript is hard to read, with confusing acronyms and poor organization of the paper. 

Author Response

Response to Reviewer 3 Comments

 

Point 1: The authors have created acronyms for Task Automation and Intelligent Agent as TA and IA, respectively. For instance, the authors used the term "TA agents" in research question 1, which is a combination of one acronym and one abbreviation. It is difficult and confusing to read the paper this way. I would appreciate it if the authors could come up with different acronyms or use the full words instead.

 

Response 1: I would like to thank the reviewer for pointing out this mistake, which would have caused confusion for the readers of this manuscript. As per the reviewer's suggestion, all the acronyms have been changed to full words.

 

 

Point 2: The first research question is "What is the state-of-the-art in TA agents?" This is a very general question. The authors need to be more specific about what they want to know about TA agents.

 

Response 2: I would like to thank the reviewer for their comment. In clarification, the main focus of this study is on research question 2 (RQ2). However, RQ2 on its own would have caused more confusion if the reader did not know about the state-of-the-art systems and applications available. To strengthen the foundation of RQ2, RQ1 was formulated; it gives an overview of the user-centred approach for task automation, which in the literature is programming-by-demonstration and is considered effective for end-user development.

 

Point 3: The authors need to explain the motivation for the research questions, which is missing from the current manuscript.

 

Response 3: Respectfully, the motivation for this research work has been included in the 5th paragraph of the introduction section (lines 98-102). For the reviewer's ease, the motivation is quoted below.

“Usability is a crucial factor to consider in designing task automation intelligent agents, as it can significantly affect the user experience and overall effectiveness of the systems [21, 22], such that good usability can lead to higher user satisfaction, increased productivity, and improved efficiency [23]. Conversely, poor usability can lead to frustration, decreased productivity, and decreased efficiency [11].”

 

Point 4: There is no rationale for why the authors are extracting data from papers. It is confusing to see how they ended up explaining the categories of desktop-based TA systems and web-based TA systems. Table 1 shows the heuristic development methods; however, the content listed under heuristic development methods consists of different types of papers, such as literature reviews and usability issues. These are not heuristic development methods.

 

Response 4: Respectfully, the categorization into groups such as desktop-based and mobile-based systems was done to simplify the review of the state-of-the-art intelligent systems. Based on the programming-by-demonstration approach, different systems have been developed in different domains. All desktop-based systems and applications are grouped into one category, and online systems and applications into a separate category. Furthermore, the developments in mobile-based systems and applications in recent decades are grouped into their own category.

Similarly, Table 1 shows the methods used by the researchers whose studies are included in this review. The table provides an overview of which methods other researchers have used in the last half-decade to develop usability heuristics.

 

Point 5: Line number 338 shows that 39 review papers were selected for this review. However, Table 1 shows a total of 40 papers, which is contradictory. Overall, there are major flaws in the literature review methodology, and there is no explanation of how the results were derived. The conclusions are not inferred from the results.

 

Response 5: Respectfully, there was a typographical error in Table 1, which showed a total of 40 review papers. This mistake has been corrected.

The results of this review have been discussed in the discussion section. The discussion provides an overview of what has been done in task automation intelligent agents using the programming-by-demonstration approach and also covers the focus of usability studies in recent years.

Point 6: The authors did not mention the significance of this study, and there are no limitations or significance of the study mentioned. A major revision is required, and several sections need to be revised.

Response 6: The significance of this study has been rewritten in more detail and included in the introduction section (lines 129-138): “Developing usability heuristics for task automation intelligent agents is important because of the nature of intelligent agents, which requires interaction with humans; this interaction should be natural and intuitive, and the agent should be able to understand and respond to user input in a way which is accurate and reliable, while also ensuring that the agents are user-friendly and easy to use. Developing domain-specific usability heuristics for task automation intelligent agents can help ensure that these agents meet these requirements and provide guidelines for designing agents that are understandable and easy to use. Overall, developing usability heuristics for task automation intelligent agents is crucial and necessary to provide a positive and effective human-computer interaction experience.”

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

I appreciate that the authors have considered most of my formal suggestions. As for my remark concerning the references, I believe there was a misunderstanding. I meant to include the names of authors in case they are directly addressed, e.g., “At the same time, other IAs can only access some integrated external applications and online services, e.g., searching Google, checking the weather, etc., as concluded by the study of Li [14].” On the other hand, if a reference to a paraphrased statement is provided, the numbers should be kept as they were, e.g., “For example, in manufacturing, TA task automation leads to streamlining production processes, reducing errors, and increasing efficiency [2]”. I’m very sorry for this confusion!

Now, as for my main concern, which relates to the use of two rather distinct research questions, I’m afraid I have to stick with my argument that this significantly reduces the focus of the article. An article requires a clear research goal. Having two rather unrelated research questions defeats this goal. I totally agree with the authors’ key message that intelligent agents are used for task automation and that there are no heuristics to be followed to increase the usability of such agents. And I also believe that this is a relevant field of research. However, a research question which investigates the general state of usability heuristics for different domains (i.e., RQ2) does not really add much value to this overall research goal. In other words, what do I learn from reading 5 pages (30% of the entire article) of literature review looking at usability heuristics that are used in other domains and settings, if the only usability heuristics which are actually relevant for the research at hand are those which target intelligent agents, in particular those that focus on task automation? If the focus of the article is to present a literature review on the current state of usability heuristics in general, and for which domains they have been developed, such an approach would be OK, but the presented goal is to focus on IAs for task automation. If there is a lack of such heuristics (which there is, I agree), then simply state this and maybe report on the small number of heuristics for IAs in general. It does not require a literature review covering all domains to support this argument. It rather requires an in-depth and thorough analysis of those heuristics which include some sort of IA and thus at least touch upon this topic. So, in other words, if this part of the literature review should really stay, then I’d strongly recommend at least focusing on whether these other studies include some sort of intelligent agent part. I guess that many of the heuristics focusing on mobile apps, virtual learning environments, websites, etc. also have some sort of intelligent agent perspective and/or task automation. From this, one could then extract some initial ideas for a set of heuristics which more specifically focus on IAs for task automation and thus justify a need for a more general framework for IA usability heuristics. With this, I believe, the article can be brought back into focus.

Furthermore, I believe that the discussion requires some more editing, as it holds a number of statements which are not supported by the presented review. That is:

692: “This review study concludes that one of the major limitations of mobile-based task automation systems and applications is usability.” => I don’t see evidence for this conclusion. Such a conclusion could be drawn from a study which evaluates the use of mobile-based task automation. But the presented study did not do this.

700: “This study also suggests that usability is an essential aspect of task automation intelligent agents, as it has the potential to make human life easier.” => how is this suggestion supported by the study?

704: “This study also concludes that unfortunately, the absence of a usability evaluation methods by researchers has have resulted in low adoption of these products among users.” => there is no evidence whatsoever in the article which merits this conclusion.

Finally, I’d recommend one more round of proofreading, as the article still suffers from significant language disfluencies and grammar mistakes.

Another round of proofreading is recommended.

Author Response

Response to Reviewer 1 Round 2 Comments

 

Point 1: I appreciate that the authors have considered most of my formal suggestions. As for my remark concerning the references, I believe there was a misunderstanding. I meant to include the names of authors in case they are directly addressed, e.g., “At the same time, other IAs can only access some integrated external applications and online services, e.g., searching Google, checking the weather, etc., as concluded by the study of Li [14].” On the other hand, if a reference to a paraphrased statement is provided, the numbers should be kept as they were, e.g., “For example, in manufacturing, TA task automation leads to streamlining production processes, reducing errors, and increasing efficiency [2]”. I’m very sorry for this confusion!

Response to point 1: The authors would like to thank the reviewer for their comments and for clarifying them further in round 2. This has helped us a lot in better understanding the comments. The said mistakes have been corrected and the manuscript has been updated.

Point 2: Now, as for my main concern, which relates to the use of two rather distinct research questions, I’m afraid I have to stick with my argument that this significantly reduces the focus of the article. An article requires a clear research goal. Having two rather unrelated research questions defeats this goal. I totally agree with the authors’ key message that intelligent agents are used for task automation and that there are no heuristics to be followed to increase the usability of such agents. And I also believe that this is a relevant field of research. However, a research question which investigates the general state of usability heuristics for different domains (i.e., RQ2) does not really add much value to this overall research goal. In other words, what do I learn from reading 5 pages (30% of the entire article) of literature review looking at usability heuristics that are used in other domains and settings, if the only usability heuristics which are actually relevant for the research at hand are those which target intelligent agents, in particular those that focus on task automation? If the focus of the article is to present a literature review on the current state of usability heuristics in general, and for which domains they have been developed, such an approach would be OK, but the presented goal is to focus on IAs for task automation. If there is a lack of such heuristics (which there is, I agree), then simply state this and maybe report on the small number of heuristics for IAs in general. It does not require a literature review covering all domains to support this argument. It rather requires an in-depth and thorough analysis of those heuristics which include some sort of IA and thus at least touch upon this topic. So, in other words, if this part of the literature review should really stay, then I’d strongly recommend at least focusing on whether these other studies include some sort of intelligent agent part. I guess that many of the heuristics focusing on mobile apps, virtual learning environments, websites, etc. also have some sort of intelligent agent perspective and/or task automation. From this, one could then extract some initial ideas for a set of heuristics which more specifically focus on IAs for task automation and thus justify a need for a more general framework for IA usability heuristics. With this, I believe, the article can be brought back into focus.

Response to point 2: The authors thank and appreciate the reviewer for their critical analysis of the manuscript. After extensive discussion among the authors and an effort to understand the reviewer's reasoning, the authors have made changes to the manuscript in terms of the research questions and have made additions to the literature review for better understanding. The research questions have been expanded so that readers can understand the research's motives, further clarifying this work's objectives. The changes have been made to the manuscript with the “Track Changes” option, so the new changes can be easily identified. We would like to request the reviewer to kindly take their precious time and evaluate our changes.

The changes are done in:

RQ1: The updated research question is “What is the state-of-the-art in Task Automation Intelligent Agents? and whether these intelligent agents use any usability guidelines in the development process.”

RQ2: The updated research question is “What are the existing methods and techniques for developing usability heuristics and for which domains they have been developed? and are there any domain-specific usability heuristics for evaluating intelligent agents?”

Similarly, in the literature review and analysis section, the categories of domains have also been rewritten for better understanding and to explain the difference between intelligent agents and the domains for which usability heuristics have been developed in the last half-decade.

Furthermore, I believe that the discussion requires some more editing, as it holds a number of statements which are not supported by the presented review. That is:

Point 3: 692: “This review study concludes that one of the major limitations of mobile-based task automation systems and applications is usability.” => I don’t see evidence for this conclusion. Such a conclusion could be drawn from a study which evaluates the use of mobile-based task automation. But the presented study did not do this.

Response to point 3: We thank the reviewer for pointing out these mistakes. As per the comment, the discussion section has been edited for clarity. Below is the updated statement:

“This review study concludes that one of the major limitations or causes of inadaptability of such mobile-based task automation systems and applications is the unavailability of domain-specific usability heuristics for developers and designers to develop easy-to-use and user-friendly systems.”

 

Point 4: 700: “This study also suggests that usability is an essential aspect of task automation intelligent agents, as it has the potential to make human life easier.” => how is this suggestion supported by the study?

Response to point 4: As per the reviewer’s comment, this statement has been rewritten for better understanding. Below is the edited statement:

“This study also suggests that the human-computer interaction community needs to give more attention to developing systematic domain-specific usability heuristics, such as for task automation intelligent agents because these systems have the potential to make human life easier. To effectively utilize this potential, usability is an essential aspect to consider during the designing and development of such systems and applications.”

Point 5: 704: “This study also concludes that unfortunately, the absence of a usability evaluation methods by researchers has have resulted in low adoption of these products among users.” => there is no evidence whatsoever in the article which merits this conclusion.

Response to point 5: This statement was rewritten from the introduction section; however, after discussion among the authors, it has been removed from the discussion section.

Finally, I’d recommend one more round of proofreading as the article still suffers from significant language disfluencies and grammar mistakes.

We would like to mention that this article has been proofread by the three authors, and the premium version of Grammarly has also been used to identify grammatical mistakes.

Comments on the Quality of English Language:

Another round of proofreading is recommended.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

I recommend to accept this paper.

The quality of English language is good.

Author Response

The authors would like to thank the reviewer for taking the time to review our manuscript and for their constructive comments.

Reviewer 3 Report

Thank you to the authors for addressing my concerns.

I still do not understand the rationale for categorizing the heuristic development methods for automation. How can literature reviews be one of the categories of heuristic development methods? What is the message that the authors are giving to the reader through Table 1? That is why the significance of the study should be written in the discussion or results and analysis section.

Author Response

Response to Reviewer 3 Round 2 Comments

 

Point 1: Thank you to the authors for addressing my concerns. I still do not understand the rationale for categorizing the heuristic development methods for automation. How can literature reviews be one of the categories of heuristic development methods? What is the message that the authors are giving to the reader through Table 1? That is why the significance of the study should be written in the discussion or results and analysis section.

Response point 1: I would like to thank the reviewer for their comment. We will take this opportunity to clarify the methodology further so that readers can better understand its significance. In clarification, the categorization of development methods was taken from a previous systematic literature review by Daniela Quinones and Cristian Rusu in 2017. In their study, they conducted an exhaustive review of 73 studies related to usability heuristics for specific domains. Their objective was to identify the approaches used to create usability heuristics and whether a formal and systematic process was involved. Their study, which included papers published between 2006 and 2016, concluded that “The creation of heuristics is mainly based on existing heuristics, literature reviews, usability problems, and guidelines.” Below is the DOI of the article, published by Elsevier.

DOI: https://doi.org/10.1016/j.csi.2017.03.009

The comment about Table 1 has allowed us to rewrite and modify the table for better understanding. In clarification, Table 1 shows the categories of methodology used in previous studies between 2018 and 2023 to develop usability heuristics. A new column, “References”, has been added to help readers better understand which papers used which approach to develop usability heuristics.

Lastly, I would like to thank the reviewer for recommending that we rewrite the study's significance in the manuscript's discussion section. The significance of the study has been added on lines 675-684.

The authors of the manuscript thank the reviewer wholeheartedly for their constructive comments.

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

Thanks for considering my recommendations. I still believe that it is a bit far-fetched to join these two research questions, yet I think your recent edits have addressed my concerns adequately and thus I'd judge the article to be ready for publication. 

There are still some minor language mistakes.
