Article
Peer-Review Record

Introduction of the First AI Impact Assessment and Future Tasks: South Korea Discussion

by Jonggu Jeong
Submission received: 27 June 2022 / Revised: 21 September 2022 / Accepted: 23 September 2022 / Published: 29 September 2022

Round 1

Reviewer 1 Report

It must be underlined that the article is of a rather descriptive nature, as it describes the current state of legislative attempts in the field of AI and some basic concerns they raise. It shows these concerns from the South Korean perspective, which is interesting. This is not a deep theoretical study, but a contribution that summarizes some current developments in the AI & law area. Nevertheless, as this is still a new and not very deeply elaborated field of research, I opt for publishing this essay.

Author Response

I really appreciate the Reviewer’s comment and am grateful for your feedback.

Reviewer 2 Report

The aim of the study is “to examine the artificial intelligence impact assessment introduced in South Korea”. The subject of the study is noteworthy, especially as the examined act is one of the first to formulate a legal basis for conducting cost-benefit analyses before and during the use of artificial intelligence systems. In the literature, this topic is usually addressed in terms of the conditions that such systems should meet, and not in terms of formal rules for carrying out such an assessment. In this respect, the study should be considered desirable.

The main weakness of the study is its very brief analysis of the act in question. In fact, the Author (Authors) devotes only 2 of 10 pages to the examination of the act, with the last paragraph on page 7 practically quoting the content of Art. 56 of the Framework Act on Intelligent Information. On the next page there is a table from a separate source (it largely repeats the content of Art. 56). Generally, the only creative contribution of the Author (Authors) is to draw conclusions (point 6) pointing to the shortcomings of the South Korean solution. Moreover, the Author (Authors) does not relate the conclusions to the observations made in the previous parts of the study (there is no logical connection between the parts concerning the acts discussed in point 4 and point 5).

An interesting issue, omitted by the Author (Authors), is the justification of the solutions adopted in the act. They differ from those proposed, for example, in the Proposal for an Artificial Intelligence Act (EU), in which high-risk systems are distinguished and their assessment and certification would be obligatory.

The Author (Authors) devotes the first part of the article to the characteristics of various methods used in artificial intelligence (Point 2, "The Emergence and Development of Artificial Intelligence Technology (Benefit)"), which is justified. In particular, the discussion of Transformers indicates that the Author (Authors) is familiar with the latest achievements in the field of AI methods. In the next section, the Author (Authors) presents the problems related to the use of AI systems identified in the literature, mainly limited to problems of system bias and impacts on the labor market and the environment. Regarding the considerations on the "replacement" of workers with robots (page 5, line 8), it should be noted that the main direction of activities nowadays relies on the concept of "cobots", in which robots cooperate with humans; moreover, many reports on the potential replacement of humans by machines focus on the performance of specific categories of tasks, not on the full substitutability of jobs (see, e.g., the analysis of such reports in: Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order, 2018, Chapter 6).

In point 4, the Author (Authors) presents other acts on the basis of which it would be possible to assess the impact of AI, providing comments on each. These include the DPIA, the US Algorithmic Accountability Act (not adopted), and the work of the EU and the Council of Europe in the field of AI. It seems that since the Author (Authors) decided to present a wider context and look at other acts, this part should also be enriched with an analysis of the Proposal for an Artificial Intelligence Act (EU). Although this act has not been passed yet, due to the importance of this proposal and its pioneering nature, it should be referred to, especially since it touches upon the issues set out in the article.

The study is interesting and worth publishing; the Author (Authors) should, however, extend the analysis of Art. 56 of the Framework Act on Intelligent Information. In its present shape, it is unsatisfying and is not placed in the context of the considerations in the first part of the article.

Author Response

Comments by Reviewer

1. The main weakness of the study is its very brief analysis of the act in question. In fact, the Author (Authors) devotes only 2 of 10 pages to the examination of the act, with the last paragraph on page 7 practically quoting the content of Art. 56 of the Framework Act on Intelligent Information. On the next page there is a table from a separate source (it largely repeats the content of Art. 56). Generally, the only creative contribution of the Author (Authors) is to draw conclusions (point 6) pointing to the shortcomings of the South Korean solution. Moreover, the Author (Authors) does not relate the conclusions to the observations made in the previous parts of the study (there is no logical connection between the parts concerning the acts discussed in point 4 and point 5).

The study is interesting and worth publishing; the Author (Authors) should, however, extend the analysis of Art. 56 of the Framework Act on Intelligent Information.

Response: I appreciate the Reviewer’s comment and am grateful for your feedback. As per the Reviewer’s comments, I have added more content with deeper analysis.

- Section 4, Preceding Discussion for AI Impact Assessment (2 pages, from p. 5 to p. 7); Section 5, Introduction of AI Impact Assessment in South Korea (2020) (2 pages, from p. 7 to p. 9); and Section 6, Conclusions: Significance and Limitations of the Social Impact Assessments of Intelligent Information Services (1 page, from p. 9 to p. 10) were added.

- I have rewritten Sections 4, 5, and 6 to be logically connected.

- I have rewritten Section 5 to extend the analysis of Article 56 of the Framework Act on Intelligent Informatization.

2. In point 4, the Author (Authors) presents other acts on the basis of which it would be possible to assess the impact of AI, providing comments on each. These include the DPIA, the US Algorithmic Accountability Act (not adopted), and the work of the EU and the Council of Europe in the field of AI. It seems that since the Author (Authors) decided to present a wider context and look at other acts, this part should also be enriched with an analysis of the Proposal for an Artificial Intelligence Act (EU). Although this act has not been passed yet, due to the importance of this proposal and its pioneering nature, it should be referred to, especially since it touches upon the issues set out in the article.

Response: I also appreciate the Reviewer’s comment and am grateful for your feedback. As per the Reviewer’s comments, I have added a paragraph as follows: [pages 6-7, section 4.6. AI Impact Assessment Act, emphasis added]

4.6. AI Impact Assessment Act

The Algorithmic Accountability Act was introduced in the US Senate in 2019 but failed to be enacted. However, like the EU's GDPR, it is a significant example of an attempt to use the benefits of impact assessment to oversee artificial intelligence and other automated decision-making systems. Under the bill, any company using an automated decision-making system would have to submit an impact assessment covering fairness, bias, discrimination, and personal information protection and security. Since there are various types of automated decision-making systems, the single regulatory framework proposed by the bill is not effective enough to adequately regulate the multiplicity of AI systems. To ensure effective policy implementation, it would have been necessary to legislate a sectoral approach to supervisory regulation (Chae 2020).

It is also necessary to refer to the conformity assessment of the European Union AI Act of 2021 (draft). The draft divides artificial intelligence into four risk types (unacceptable risk, high risk, limited risk, and minimal risk). Among them, high-risk AI requires a conformity assessment before use. This differs from evaluating the impact of its use, as it targets only the technology itself, like a safety certification. However, given that it is a mandatory evaluation for the use of artificial intelligence technology, the conformity assessment should be considered when preparing a future impact assessment bill.

3. Regarding the considerations on the "replacement" of workers with robots (page 5, line 8), it should be noted that the main direction of activities nowadays relies on the concept of "cobots", in which robots cooperate with humans; moreover, many reports on the potential replacement of humans by machines focus on the performance of specific categories of tasks, not on the full substitutability of jobs (see, e.g., the analysis of such reports in: Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order, 2018, Chapter 6).

Response: I truly appreciate the Reviewer’s comment and am grateful for your feedback. As per the Reviewer’s comments, I have changed one sentence and added one reference as follows: [page 4, section 3.2., emphasis added]

3.2. Social and Environmental Risks

Artificial intelligence technology will continue to develop and become more widely accepted by society. However, artificial intelligence could increase the unemployment rate and reduce wages. Human labor can be partially replaced by machines, as in the example of cobots (Kai-Fu Lee 2018). Automating these tasks can have a negative impact on employment (Webb 2019). For example, the number of customer service staff is projected to decrease by 2029 as the number of automated devices increases (Statistics of Labor 2021). A different perspective has also been presented: jobs are likely to be replaced by artificial intelligence, but only over the long run (Cone 2019). A bigger risk is that advances in artificial intelligence technology could create a relatively large wage gap between new high-paying jobs and the low-wage jobs threatened by replacement (Salomons 2019). This aspect can be detrimental to groups with limited access to technology (Holbrook 2018).

Reviewer 3 Report

This article covers an important topic in artificial intelligence and in the legal field, AI impact assessment, in the particular case of one country: South Korea.

However, despite the general interest of the theme announced by the authors, this article is rather disappointing for the reader for several reasons.

On the one hand, the parts that constitute the heart of the authors' work (sections 5 and 6) are very uninformative. Section 6 (conclusions) offers only a weak development (usually limited to a single paragraph, and without examples or bibliographical references) of the 5 points already announced in section 1.2 or in the abstract.

On the other hand, a set of elements makes reading difficult:

- an unfortunate copy-paste appears in the first paragraphs of sections 1.1 and 1.2

- Table 1 is unreadable: it is not clear how "Subject", "Target", "Evaluation Items", and "Procedure" correspond to the right-hand part of the table

- the information in the technical parts (on neural networks) is not correct; for example, "a hidden layer" should be changed to "one or more hidden layers" (first paragraph of section 2.4)

- some important information is missing; for example, section 4.1 does not specify that the GDPR concerns the European Union

- the bibliographical references do not follow standard conventions, e.g., 1. (Aslam 2022) -> (Aslam et al., 2022); 2. (Crawford 2019) -> (Crawford & Paglen, 2019); there is a mix of lowercase and uppercase in the writing of names, and sometimes first names are used instead of surnames, etc.

Author Response

1. The parts that constitute the heart of the authors' work (sections 5 and 6) are very uninformative.

Response: I appreciate the Reviewer’s comment and am grateful for your feedback. As per the Reviewer’s comments, I have added more content with deeper analysis.

- Section 4, Preceding Discussion for AI Impact Assessment (2 pages, from p. 5 to p. 7); Section 5, Introduction of AI Impact Assessment in South Korea (2020) (2 pages, from p. 7 to p. 9); and Section 6, Conclusions: Significance and Limitations of the Social Impact Assessments of Intelligent Information Services (1 page, from p. 9 to p. 10) were added.

- I have rewritten Sections 4, 5, and 6 to be logically connected.

- I have rewritten Section 5 to extend the analysis of Article 56 of the Framework Act on Intelligent Informatization.

2. Section 6 (conclusions) offers only a weak development (usually limited to a single paragraph, and without examples or bibliographical references) of the 5 points already announced in section 1.2 or in the abstract. An unfortunate copy-paste appears in the first paragraphs of sections 1.1 and 1.2.

Response: I appreciate the Reviewer’s comment and am grateful for your feedback. As per the Reviewer’s comments, I have deleted section 1.2 and added more content to section 6.

- I deleted section 1.2.

- I added more content to section 6. [page 9, section 6. Conclusions: Significance and Limitations of the Social Impact Assessments of Intelligent Information Services, emphasis added]

The social impact assessment of intelligent information services in South Korea includes all the risk-based approaches that have been tried so far. The Data Protection Impact Assessment is included through consideration not only of privacy (Article 56.1.2.) but also of the impact on information security (Article 56.1.4.). The Human Rights Impact Assessment is also included through consideration of the safety and reliability of intelligent informatization services (Article 56.1.1.). As the social impact assessment addresses impacts on society and the environment that existing impact assessment systems did not cover, a more extensive evaluation became possible. However, it is not specified in detail at the same level as the Risk Assessment Tools of the European Union or Canada. This is because the social impact assessment of South Korea's intelligent information services is in its initial stage; the sub-legislation has not yet been enacted. Including this, there are five limitations, which are described below.

3. Table 1 is unreadable: it is not clear how "Subject", "Target", "Evaluation Items", and "Procedure" correspond to the right-hand part of the table.

Response: I appreciate the Reviewer’s comment and am grateful for your feedback. As per the Reviewer’s comments, I have deleted the table and rewritten section 5 (Introduction of AI Impact Assessment in South Korea).

5.1. Introduction of Social Impact Assessment on Intelligent Informatization Services

In the process of providing services to users, artificial intelligence systems may have unintended negative impacts on the environment or society. Even when an alternative design with the same performance is possible, excessive energy consumption can impose environmental costs by increasing carbon emissions. When artificial intelligence systems are used in society, they can replace much of the work performed by people, which can lead to social problems such as unemployment and poverty.

To achieve sustainable development by maximizing the benefits of artificial intelligence and minimizing its costs, South Korea introduced a social impact assessment of intelligent informatization services. Article 56 (Social Impact Assessments of Intelligent Information Services) of the Framework Act on Intelligent Informatization of 2020 establishes an impact assessment that targets only artificial intelligence (South Korea 2020). This is the world's first case of impact assessment legislation implemented in the NEPA style.

5.2. Subject and Target of the Social Impact Assessment on Intelligent Informatization Services 

The State and local governments may survey and assess how the utilization and spread of intelligent information services, which have far-reaching effects on citizens' lives, affect the society, economy, culture, and citizens' daily lives. The specific items include: (1) the safety and reliability of intelligent information services; (2) impacts on the information culture, such as closing the digital divide, the protection of privacy, and ethics for the intelligent information society; (3) impacts on the society and the economy, such as employment, labor, fair trade, the industrial structure, and the rights and interests of users; (4) impacts on information protection; and (5) other impacts of intelligent information services on the society, economy, culture, and citizens' daily lives.

The subjects of the social impact assessment of intelligent information services are the national and local governments. They may implement the assessment at their discretion; it is not a duty of the state and local governments. Korea also has a technology impact assessment system, which is distinguished in that the technology impact assessment must be conducted annually by the government.

The target of the social impact assessment of intelligent information services is the impact that the use and spread of such services, which have a huge effect on people's lives, have on the society, economy, and culture. It is therefore distinguished from the technology impact assessment, which aims at predicting the future resulting from technological development. Technology itself is the target only of the technology impact assessment and cannot be the target of the social impact assessment.

5.3. Evaluation Items of the Social Impact Assessments of Intelligent Information Services

The most important element of the social impact assessment of intelligent information services is the safety and reliability of intelligent informatization services (Article 56.1.1.). In fact, safety and reliability are the most controversial items, even as they form the basis of intelligent informatization services, including artificial intelligence. Safety can refer to technical, administrative, and physical safeguards. Considering its relationship with the impact on information security (Article 56.1.4.), safety here should be understood from the perspective of industrial safety, not information security. Reliability can be interpreted as the governance system itself, encompassing the entire AI ethics discourse. It is designed to address the risk of human rights infringement and may include a human rights impact assessment.

The Social Impact Assessments of Intelligent Information Services address the impact of artificial intelligence on society, since the evaluation items include effects on the information culture (Article 56.1.2.) and social and economic effects (Article 56.1.3.). The impact on the information culture (Article 56.1.2.) covers bridging the information gap, privacy, and ethics for the intelligent information society. The social and economic effects (Article 56.1.3.) cover employment, labor, fair trade, the industrial structure, and the rights and interests of users. The assessment also deals with the impact of artificial intelligence on the environment, since the evaluation items include the impacts of intelligent information services on the society, economy, culture, and daily life of the people (Article 56.1.5.). These items are evaluated to address social and environmental risks, a point that no previously proposed type of AI impact assessment has been able to deal with adequately.

5.4. Evaluation Procedure of the Social Impact Assessments of Intelligent Information Services

After the national or a local government decides to investigate and evaluate the social impact of artificial intelligence, the Minister of Science and Technology is obligated to disclose the results of the social impact assessment. The Minister can then recommend necessary measures, such as improving the safety and reliability of the intelligent information service, to national agencies and business operators. This mechanism is distinguished from the existing environmental impact assessment in terms of effectiveness, because the publicly announced results of the social impact assessment are not reflected in policy but only issued as recommendations.

Further detailed procedures are not stipulated in the law. However, since the social impact assessment of intelligent information services is a risk management system, an evaluation-communication-management process is expected to apply when the system is elaborated later. Evaluation means judging the risk of an artificial intelligence service on the basis of the social impact assessment; as this is a government-led impact assessment, it may lead to overregulation, so the independence, objectivity, and expertise of the assessment agency must be secured. Communication means collecting the opinions of stakeholders, experts, or general citizens by disclosing the results of the social impact assessment. Management means recommending the measures necessary to improve reliability and suppress the risks induced by artificial intelligence services. This can be understood as governance that systematically manages risks, going beyond simply reflecting results in policy as the environmental impact assessment does.

4. The information regarding the technical parts (on neural networks) is not correct, for example "a hidden layer" should be changed to "one or more hidden layers" (first paragraph of section 2.4).

Response: I appreciate the Reviewer’s comment and am grateful for your feedback. As per the Reviewer’s comments, I have made the change as indicated.

- a hidden layer => one or more hidden layers
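[Editor's note: a minimal sketch, for illustration only and not part of the manuscript, of why the corrected wording matters. A feed-forward network stacks one or more hidden layers between its input and output layers, rather than exactly one. The use of NumPy, the layer sizes, and the function names below are assumptions chosen for this example.]

import numpy as np

# Illustrative sketch: a feed-forward network may have one or more hidden
# layers, not exactly one. Layer sizes here are arbitrary example values.
def init_mlp(layer_sizes, seed=0):
    # layer_sizes, e.g. [4, 8, 8, 2]: input, one or more hidden, output
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:      # every hidden layer gets a nonlinearity
            x = np.maximum(x, 0.0)   # ReLU
    return x

params = init_mlp([4, 8, 8, 2])      # two hidden layers; [4, 8, 2] gives one
print(forward(params, np.ones((1, 4))).shape)  # -> (1, 2)

Varying the length of the layer_sizes list changes the number of hidden layers, which is exactly the distinction the corrected sentence captures.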

 

5. The bibliographical references do not follow standard conventions.

Response: I appreciate the Reviewer’s comment and am grateful for your feedback. As per the Reviewer’s comments, I have also made the changes as indicated.

- (Aslam 2022) -> (Aslam et al., 2022)

- (Crawford 2019) -> (Crawford et al., 2019)

Round 2

Reviewer 3 Report

This second version of the article is much better than the previous one. The authors have taken the remarks into account to fill in important parts that were missing.

The arguments presented by the authors are more precise and better referenced.

There are still some minor changes to be made in this version. In particular, the authors use upper- and lower-case letters incorrectly in their article, particularly in the references.

For instance:

"(Canter 1982) L. W. Canter, ENVIRONMENTAL IMPACT ASSESSMENT, Impact Assessment, 1:2, 1982, 6-40."

must be changed in:

"(Canter 1982) L. W. Canter, Environmental Impact Assessment, Impact Assessment, 1:2, 1982, 6-40.  (and this even if Canter, the author of this article, had capitalized his title.)

Another example of a change to make:

"(Cristina 2019) Christy Kuesel. Leading online database to remove 600,000 images after art project reveals its racist bias. 2019. The Art Newspaper. Available at https://www.artsy.net/article/artsy-editorial-online-image-database-will-remove-600-000-pic-tures-art-project-revealed-systems-racist-bias"

The article mentioned does not exist (or no longer exists) online.

On the other hand, an article with the same title and in the same journal is available with the following reference:

(Ruiz 2019) Cristina Ruiz. Leading online database to remove 600,000 images after art project reveals its racist bias. 2019. The Art Newspaper. Available at https://www.theartnewspaper.com/2019/09/23/leading-online-database-to-remove-600000-images-after-art-project-reveals-its-racist-bias

Please check all the bibliographical references, both in terms of the writing of the references and in terms of the accuracy of the citations (all the authors must be cited, type of publication, web links, etc.)

Author Response

[Response to Reviewer 3 Comments]

Thank you very much for your valuable comments.

I have changed my paper as described below:

1. Points mentioned by the Reviewer

p. 10. References

[Before] (Canter 1982) L. W. Canter, ENVIRONMENTAL IMPACT ASSESSMENT, Impact Assessment, 1:2, 1982, 6-40.

[After] (Canter 1982) L. W. Canter, Environmental Impact Assessment, Impact Assessment, 1:2, 1982, 6-40.

p. 10. References

[Before] (Cristina 2019) Christy Kuesel. Leading online database to remove 600,000 images after art project reveals its racist bias. 2019. The Art Newspaper. Available at https://www.artsy.net/article/artsy-editorial-online-image-database-will-remove-600-000-pic-tures-art-project-revealed-systems-racist-bias

[After] (Ruiz 2019) Cristina Ruiz, Leading online database to remove 600,000 images after art project reveals its racist bias, 2019, The Art Newspaper, Available at https://www.theartnewspaper.com/2019/09/23/leading-online-database-to-remove-600000-images-after-art-project-reveals-its-racist-bias

2. Others (same pattern as above)

p.10. References

[Before] (Aslam et al. 2022) Nida Aslam, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid and Reham Baageel. Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI). 2022.

[After] (Aslam et al. 2022) Nida Aslam, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid and Reham Baageel, Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI), Sustainability, 2022, 14, 7375. https://doi.org/10.3390/su14127375

p.10. References

[Before] (Crawford et al. 2019) Kate Crawford and Trevor Paglen. Excavating AI. 2019

[After] (Crawford et al. 2019) Kate Crawford and Trevor Paglen, Excavating AI: the politics of images in machine learning training sets, AI & Society, June 2021.

p.10. References

[Before] YOON CHAE. US AI regulation guide: Legislative overview and practical considerations. 2020. The Journal of Robotics, Artificial Intelligence & Law.

[After] Yoon Chae, US AI regulation guide: Legislative overview and practical considerations, The Journal of Robotics, Artificial Intelligence & Law, Volume 3, No. 1, January–February 2020.

p.10. References

[Before] (CAHAI-PDG 2021) AD HOC COMMITTEE ON ARTIFICIAL INTELLIGENCE (CAHAI) POLICY DEVELOPMENT GROUP, Human Rights, Democracy and Rule of Law Impact Assessment of AI systems, 2021.

[After] (CAHAI-PDG 2021) Ad Hoc Committee on Artificial Intelligence (CAHAI) Policy Development Group, Human Rights, Democracy and Rule of Law Impact Assessment of AI systems, Council of Europe, 2021.

p.10. References

[Before] (EIGE 2016) European institute for gender equality, "GENDER IMPACT ASSESSMENT", 2016, p. 8.

[After] (EIGE 2016) European Institute for Gender Equality, Gender Impact Assessment, 2016, p. 8.

p.10. References

[Before] (European Commission 2019) Policy and investment recommendations for trustworthy Artificial Intelligence. 2019. available at https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence

[After] (European Commission 2019) European Commission, Policy and investment recommendations for trustworthy Artificial Intelligence, 2019, available at https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence

p.10. References

[Before] (Gebru et al. 2021) Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell. On the Dangers of Stochastic Parrots: Can. 2021, In Proceedings of the 2021 ACM Conference on Fairness, Accountability, 610-623. Available At https://dl.acm.org/doi/10.1145/3442188.3445922

[After] (Gebru et al. 2021) Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, 2021, In Proceedings of the 2021 ACM Conference on Fairness, Accountability, 610-623. Available at https://dl.acm.org/doi/10.1145/3442188.3445922

p.10. References

[Before] (Goodfellow 2016) Ian Goodfellow, Deep Learning, 2016, MIT Press.

[After] (Goodfellow 2016) Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, 2016, MIT Press.

p.10. References

[Before] (Holbrook 2018) Nithya Sambasivan, Jess Holbrook, Toward responsible AI for the next billion users. 2018. Interactions, 26(1), 68-71. Available at https://dl.acm.org/doi/10.1145/3298735

[After] Nithya Sambasivan, Jess Holbrook, Toward responsible AI for the next billion users, 2018, Interactions, 26(1), 68-71, Available at https://dl.acm.org/doi/10.1145/3298735

3. In addition to the above, I have revised all the references in the same manner.

- Thank you again for your kind comments.

Author Response File: Author Response.pdf
