Laws · Open Access Article · 12 September 2024

China’s Legal Practices Concerning Challenges of Artificial General Intelligence

School of Law, Nankai University, Tianjin 300350, China
* Author to whom correspondence should be addressed.

Abstract

The artificial general intelligence (AGI) industry, represented by ChatGPT, has disrupted social order in the course of its development and has brought various risks and challenges, such as ethical concerns in science and technology, attribution of liability, intellectual property monopolies, data security, and algorithm manipulation. The development of AI currently faces a crisis of trust. The governance of the AGI industry must therefore be prioritized, and the implementation of the Interim Administrative Measures for Generative Artificial Intelligence Services should be taken as an opportunity to strengthen the norms for the supervision and management of scientific and technological ethics within the framework of the rule of law. It is also essential to continuously improve the regulatory system for liability, balance the dual values of fair competition and the encouragement of innovation, and strengthen data-security protection systems in the field of AI. Together, these measures will enable coordinated governance across multiple domains, stakeholders, systems, and tools.

1. Background

The world now stands at a historical intersection of a new round of technological revolution and industrial transformation. Following industrialization and informatization, intelligence has become the new developmental trend of the era (Sun and Li 2022). Driven by national policies promoting the digital economy and by the demand for high-quality economic development, AI technology and industry have advanced rapidly. As technological innovation becomes more active and industrial integration deepens, technologies such as intelligent automation, recommendation, search, and decision-making have become deeply integrated into enterprise operations and social services, bringing significant economic and social benefits. In short, artificial general intelligence (AGI) plays an increasingly crucial role in optimizing industrial structures, enhancing economic activity, and aiding economic development (Guo and Hu 2022).
Generally speaking, artificial intelligence refers to algorithms or machines that achieve autonomous learning, decision-making, and execution based on a given amount of input information. The development of AI is built on the improvement of computer processing power, the advancements in algorithms, and the exponential growth of data (Cao and Fang 2019). Since John McCarthy first proposed the concept of artificial intelligence in 1956, the progress of AI has not always been smooth. It has experienced three periods of prosperity driven by machine learning, neural networks, and internet technologies, as well as two periods of stagnation due to insufficient computing power and imperfect reasoning models (Jiang and Xue 2022). With the deepening implementation of AI and the recent popularity of technologies like GPT-4, a new wave of artificial intelligence-generated content (AIGC) has emerged, demonstrating the capabilities of AGI. However, generative artificial intelligence (GAI) has also raised concerns due to its inherent technical flaws and issues like algorithmic black boxes, decision biases, privacy breaches, and data misuse, leading to a crisis of trust. Although AGI has not developed into a fully mature product, compared to GAI, it possesses a higher level of intelligence. It is expected to bring more convenience to human life, while it is also likely to trigger a more severe trust crisis.
In this context, the key to addressing the challenges of AGI development lies in providing a governance framework that balances ethics, technology, and law (Zhao 2023). Such a framework should respect the laws of technological development while aligning with the requirements of legal governance and the logic of scientific and technological ethics. However, both theoretical research and practical experience indicate that the current governance of AGI lacks specificity, systematicity, comprehensiveness, and a long-term perspective. It is therefore urgent to apply systematic, scientific legal methods to ensure and promote a positive cycle between technological breakthroughs and high-level competition. This approach should integrate technological, industrial, institutional, and cultural innovation, and thereby advance the innovative development of AGI.
This article takes issues arising from representative GAI products and services as examples, discusses the supporting elements of data, algorithms, and computing power in the training of GAI models, and then extends the argument to AGI. It further examines how China is currently responding to these challenges in order to safeguard the innovative development of AGI. On this basis, the article proposes legal solutions to promote the innovative development of AGI, with the aim of enriching theoretical research in this field.

2. Challenges of Generative Artificial Intelligence Technology

Science and technology are primary productive forces, and scientific and technological progress is an indispensable driver of industrial development. With advances in technologies such as GAI, people have discovered that AI is capable of accomplishing tasks previously unimaginable. At the same time, people have realized that the safety challenges posed by AI's development and its deep integration into daily life are becoming increasingly complex.
Artificial intelligence is commonly divided into specialized artificial intelligence and general artificial intelligence. Specialized artificial intelligence, also known as "narrow AI" or "weak AI", refers to AI programmed to perform a single task. It extracts information from specific data sets and cannot operate outside its designed task scenarios; it is characterized by strong functionality but poor interoperability. General artificial intelligence (AGI), also known as "strong AI", "full AI", or "deep AI", possesses general human-like intelligence, enabling it to learn, reason, solve problems, and adapt to new environments as a human does. AGI can address a wide range of problems without knowledge specially encoded for particular application areas.
With the emergence of large models like GPT-4 that demonstrate powerful natural language-processing capabilities, the possibility of achieving AGI with “big data model + multi-scenario” has increased. Although no technology has yet fully reached the level of AGI, some scholars believe that certain generative AI models have initially achieved a level close to AGI (Bubeck et al. 2023).
Currently, the security issues raised by models such as GPT-3.5, characterized by autonomous intelligence, data dependency, the "algorithmic black box", and a lack of interpretability, have attracted widespread attention. If technology products that truly meet AGI standards emerge, they could bring even more significant security challenges, with more severe consequences and broader impacts on national security, social ethics, and the safety of individual life and property. It is therefore essential to explore the specific risks posed by generative AI in order to ensure that the innovative development of GAI benefits human society without causing harm.

2.1. Ethical Risks in Science and Technology

Scientific research and technological innovation must adhere to the norms of scientific and technological ethics, which are crucial for the healthy development of scientific activities. Generative AI can now produce content in text, image, audio, and video formats, and its fields of application are extremely broad. However, the lack of established usage norms for this technology poses ethical risks, leading to distrust in the application of AI. This problem is especially serious during the transition from weak AI to strong AI, where AI's increasing autonomy presents unprecedented challenges to traditional ethical frameworks and to the fundamental nature of human thought.
GAI services excel in areas such as news reporting and academic writing, making the technology an easy tool for fabricating rumors and forging papers. The academic journal Nature has published multiple analytical articles on ChatGPT, discussing the potential disruptions that large language models (LLMs) like ChatGPT could bring to academia, the infringement risks associated with generated content, and the need to establish usage regulations (Stokel-Walker and Van Noorden 2023). It is foreseeable that the lack of clear ethical standards could lead to frequent academic fraud, misinformation, and rumor spreading, thereby eroding trust in AI technology. This distrust could even extend to situations where AI technology is not used (Chen and Lin 2023).
Moreover, the responses that GAI produces from data and algorithms are inherently uncertain. With the continuous iteration of GAI, some technologies are considered to have approached the level of AGI and human-like intelligence. As GAI develops further, it raises profound questions about whether the technology will independently adopt ethical principles similar to those of humans. To address this, some scholars have proposed incorporating human factors engineering into the research and development of AGI, with the aim of guiding advanced GAI and similar AI toward being safe, trustworthy, and controllable (Salmon et al. 2023).

2.2. Challenges in Responsibility Allocation

Safety incidents involving AGI affect not only the security of devices and data but can also lead to serious production accidents that endanger human life. In recent years, incidents caused by the autonomous driving technologies of companies such as Google, Tesla, and Uber have intensified the ethical debate over whether humans or AI should bear responsibility. If the allocation of responsibility is not clearly defined in advance, obtaining remedies and defending rights after an infringement becomes more difficult, fostering public distrust of AI. Moreover, it could allow AI products to develop in ways that deviate from social ethics and legal norms, ultimately threatening economic and social order.
In terms of law, the legal and ethical standards for AI remain underdeveloped, and infringement incidents are frequent. In the U.S., three artists—Sarah Andersen, Kelly McKernan, and Karla Ortiz—filed a lawsuit against AI companies and platforms such as Stability AI and Midjourney, claiming that the data used in their training processes infringed the copyrights of millions of artists¹. When such incidents occur, identifying the liable party and correctly allocating responsibility becomes a major challenge. The concept of the "responsibility gap", introduced by Andreas Matthias in 2004, refers to the inability of algorithm designers and operators to foresee outcomes during an algorithm's autonomous learning process. Because humans lack sufficient control over the actions of such machines, their builders and operators cannot be held liable under the traditional fault-based attribution of responsibility (Matthias 2004).
In terms of application, GAI technology has "universal accessibility": its usage and cost thresholds are low, so a wide range of people can easily access and use it. This accessibility increases the risk of infringement. For example, AI makes it easy to create and disseminate false information, and some users intentionally fabricate and spread rumors to boost web traffic, increasing the frequency of misinformation dissemination (Chen and Lin 2023).

2.3. Intellectual Property Challenges

With the widespread application of GAI, concerns have arisen over the legality of the training-data sources for large AI models and over whether the content they generate can be considered a work.
While it is widely accepted that GAI, as a computer program, can be protected as intellectual property, significant controversy remains over the intellectual property issues related to massive data training. The lack of clear boundaries or definitions regarding intellectual property in data can easily result in a “tragedy of the commons”. Conversely, overemphasizing the protection of data as intellectual property can hinder technological development, resulting in an “anti-commons tragedy” (Peng 2022). Scholars are actively discussing how to balance the protection of intellectual property within data and the advancement of technological innovation.
Furthermore, there is debate over whether content generated by AI can be recognized as a work. GAI produces content based on extensive data training and continuously refines its output according to user feedback, making it difficult to determine whether the content is entirely autonomously generated by AI; this leads to disputes. Some scholars argue that GAI merely mimics the human creative process and that its content is not a product of human intellect. In practice, however, a few countries do recognize computer-generated content as a work. For instance, Section 9(3) of the UK's Copyright, Designs and Patents Act 1988 (CDPA) provides that, for a computer-generated work, the author is taken to be the person who made the arrangements necessary for its creation, thereby bringing such content within copyright protection.
Finally, there is no consensus on the ownership of AI-generated content, though most scholars agree that AI itself cannot be the rights holder of a work. In the U.S. "Monkey Selfie" copyright dispute, a federal court held that copyright law does not extend its protection to animals and that a work must be created by a human to be copyrightable². This indicates that the U.S. does not recognize copyright in non-human entities, meaning that AI, as a non-human entity, likewise enjoys no copyright protection. Similarly, Article 2 of China's Copyright Law stipulates that works created by Chinese citizens, legal entities, or unincorporated organizations are protected by copyright; AI is thus not a subject of copyright under Chinese law either.

2.4. Data-Related Risks

Data elements have immense potential value. If this value is fully realized through the pattern of "potential value—value creation—value realization", it can significantly drive social and economic development (B. Chen 2023). As users become more aware of protecting their data privacy and as the risks of data breaches grow, striking a balance between data protection and data-driven AI research is crucial to achieving public trust in AI technology.
In GAI technology, the first type of risk is inherent in the security of the training data. The training outcomes of GAI models depend directly on the input data, yet, owing to limitations in data-collection conditions, data from different groups are unevenly represented. For example, current training corpora are predominantly in English and Chinese, making it difficult for languages with fewer speakers to be integrated into the AI world and introducing obvious limitations.
The second type of risk arises in the processes of data collection and use. With the advancement of internet technology, personal information has grown in volume and become easier to collect. The growing scale of data is both the key to delivering GAI services and a primary source of trust crises. The training-data volume for GPT-4 has reportedly reached 13 trillion tokens (Patel and Wong 2023). Although mainstream GAI service providers have not disclosed their data sources, these data are known to come mainly from public web-scraping datasets and large human-language datasets. Accessing and processing such data in a secure, compliant, and privacy-protective manner is a challenge, demanding higher standards for technical security safeguards.
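
To make the "tokens" unit concrete, the following minimal Python sketch counts the tokens a small text sample would contribute to a training corpus. It is an illustration only: the open-source tiktoken tokenizer, the cl100k_base encoding, and the two-sentence sample are assumptions chosen for demonstration, not any provider's disclosed pipeline.

```python
# Minimal sketch: how raw text is measured in "tokens" before training.
# Assumes the open-source `tiktoken` tokenizer (pip install tiktoken);
# the encoding name and sample corpus are illustrative assumptions.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

corpus = [
    "Artificial general intelligence raises novel governance questions.",
    "Generative models are trained on web-scale text corpora.",
]

total_tokens = sum(len(encoding.encode(doc)) for doc in corpus)
print(f"documents: {len(corpus)}, tokens: {total_tokens}")
# Applying this counting step to a real training corpus yields figures
# on the order of trillions of tokens, as reported for frontier models.
```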

2.5. Algorithm Manipulation Challenges

In the AI era, the uncontrollability stemming from the statistical nature of algorithms, the autonomous learning ability of AI, and the inexplicability of deep-learning black-box models have become new sources of user distrust. From the perspective of technical logic, algorithms play a core role in the hardware infrastructure and applications of GAI, shaping user habits and values (L. Zhang 2021). Most of the algorithmic challenges arise from one uncontrollable technical defect: the black-box nature of AI models' decision-making processes.
First, algorithms lack stability. GAI faces various attacks targeting its data and systems, such as virus attacks, adversarial attacks, and backdoor attacks. For instance, feeding malicious comments into a model can effectively skew a recommendation algorithm, producing inaccurate recommendation outputs.
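
A minimal sketch of this failure mode follows. It uses a toy average-rating recommender with hypothetical items and ratings; real recommendation systems are far more complex, but the poisoning mechanism is the same in kind.

```python
# Toy illustration of data poisoning against a rating-based recommender.
# Items, ratings, and the attack are hypothetical.
from statistics import mean

ratings = {
    "item_a": [5, 5, 4, 5],  # genuinely well-reviewed item
    "item_b": [2, 3, 2],     # mediocre item the attacker wants promoted
}

def recommend(ratings_db):
    """Recommend the item with the highest average rating."""
    return max(ratings_db, key=lambda item: mean(ratings_db[item]))

print(recommend(ratings))        # -> item_a (honest signal)

# The attacker floods item_b with fake five-star reviews:
ratings["item_b"].extend([5] * 50)
print(recommend(ratings))        # -> item_b (poisoned signal)
```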
Second, the explainability of algorithms needs improvement. Machine-learning algorithms, particularly those based on deep learning, are essentially end-to-end black boxes. On the one hand, the internal processes and operating mechanisms of large models containing vast numbers of parameters remain unclear; on the other hand, it is unclear which specific data in the training database influence a given decision of the AI algorithm.
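
Because the model's internals are inaccessible, post hoc explanation typically reduces to probing: perturbing inputs and observing how the output shifts. The sketch below illustrates this with a hypothetical opaque scoring function standing in for a deployed black box; it is a simplified instance of perturbation-based attribution, not a description of how any particular GAI system is audited.

```python
# Toy illustration of perturbation-based probing of a black-box model.
# `opaque_model` is a hypothetical stand-in; in a real deployment its
# internals (here, the weights) would be hidden from the auditor.
def opaque_model(features):
    weights = [0.8, -0.1, 0.3]  # hidden in practice
    return sum(w * x for w, x in zip(weights, features))

x = [1.0, 2.0, 3.0]
baseline = opaque_model(x)

# Zero out each feature in turn and measure the change in output:
for i in range(len(x)):
    probe = list(x)
    probe[i] = 0.0
    print(f"feature {i}: influence = {baseline - opaque_model(probe):+.2f}")
# Larger absolute changes suggest more influential features, but the
# probe reveals nothing about *why* the model weighs them as it does.
```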
Lastly, algorithmic bias and discrimination remain unresolved. These issues arise from both internal and external factors. Internally, if developers set discriminatory factors or misconfigure certain parameters during development, the algorithm will inherently exhibit biased tendencies. Externally, since GAI optimizes its content based on feedback, any biases and discrimination present in the feedback data will affect the final generated content.
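
The external mechanism can be shown with a minimal simulation. All numbers are hypothetical and the update rule is deliberately simplified; the point is only that when exposure follows feedback, and feedback rewards whatever is already most exposed, a small initial skew compounds over time.

```python
# Toy simulation of bias amplification through a feedback loop.
# Exposure shares for two viewpoints; hypothetical numbers throughout.
shares = {"viewpoint_x": 0.55, "viewpoint_y": 0.45}  # slight initial skew

for step in range(5):
    # Feedback disproportionately rewards the majority viewpoint
    # (exponent > 1), so its share compounds each round.
    shares = {k: v ** 1.2 for k, v in shares.items()}
    norm = sum(shares.values())
    shares = {k: v / norm for k, v in shares.items()}
    print(step, {k: round(v, 3) for k, v in shares.items()})
# The minority viewpoint's share steadily shrinks even though the
# initial imbalance was small and no one coded the bias explicitly.
```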

5. Conclusions

Strategic emerging industries are the new pillars of future development. The legal landscape in the digital era should anticipate the future form of global governance for AGI. The era of AGI is not far off, with GAI technologies advancing rapidly in a short period. Their wide range of applications highlights the revolutionary significance of AGI, making the AI industry a new focal point of global competition. However, the innovative development of the AGI industry also faces challenges related to technological ethics, intellectual property, accountability mechanisms, data security, and algorithmic manipulation, which undermine the trustworthiness of AI.
Therefore, it is necessary to further develop a legal regulatory framework for the AI industry and improve the governance ecosystem for technological ethics. By introducing relevant codes of conduct and ethical guidelines, we can promote the healthy and sustainable development of the AI industry within a legal framework. Addressing the aforementioned issues requires strategic research and the pursuit of feasible technical solutions. By establishing technological ethics standards, improving the system for regulating liability, protecting competition while encouraging innovation, enhancing AI data-security measures, and standardizing algorithmic regulation in the AI field, the obstacles on the path to the innovative development of AGI can eventually be removed.

Author Contributions

Conceptualization, B.C.; methodology, B.C.; writing—original draft preparation, B.C.; writing—review and editing, B.C. and J.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the major project in Judicial Research of the Supreme People’s Court of P.R.C. (grant number ZGFYZDKT202317-03) and the key project of Humanities and Social Science study from the Ministry of Education of P.R.C. (grant number 19JJD820009).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data underlying the results are available as part of the article and no additional source data are required.

Conflicts of Interest

The authors declare no conflicts of interest.
Notes
¹ See Andersen, et al. v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. Jan. 13, 2023).
² See Naruto v. Slater, No. 16-15469 (9th Cir. 2018).

References

  1. Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, and et al. 2023. Sparks of Artificial General Intelligence: Early Experiments with GPT-4. Available online: https://arxiv.org/abs/2303.12712 (accessed on 30 June 2024).
  2. Cao, Jianfeng, and Lingman Fang. 2019. The Path and Enlightenment of EU’s Ethics and Governance of Artificial Intelligence. AI-View 4: 40–48. [Google Scholar]
  3. Cao, Jianfeng. 2023. Towards Responsible AI: Trends and Outlook for AI Governance in China. Journal of Shanghai Normal University (Philosophy & Social Sciences Edition) 52: 5–15. [Google Scholar]
  4. Chen, Bing, and Siyu Lin. 2023. Facing the Trust Crisis in Artificial Intelligence and Accelerating the Development of Trustworthy AIGC. First Financial Daily, April 24. Available online: https://m-yicai-com.translate.goog/news/101739971.html?_x_tr_sl=zh-CN&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=sc (accessed on 30 June 2024).
  5. Chen, Bing. 2023. Scientific Construction of Data Element Trading System. Frontier 6: 68–80. [Google Scholar]
  6. Chen, Jidong. 2023. Theoretical System and Core Issues of Artificial Intelligence Law. Oriental Law 1: 62–78. [Google Scholar]
  7. CNR. 2021. Jingdong Exploration Research Institute and China Information and Communications Technology Academy Officially Released the First Domestic “Trusted Artificial Intelligence White Paper”. Available online: https://tech.cnr.cn/techph/20210709/t20210709_525530669.shtml (accessed on 28 August 2024).
  8. Ding, Xiaodong. 2024. Legal Regulation of Artificial Intelligence Risks—An Example from the EU Artificial Intelligence Act. Science of Law (Journal of Northwest University of Political Science and Law) 42: 3–18. [Google Scholar]
  9. Fan, Yuji, and Xiao Zhang. 2022. The Mode Transformation, Selection, and Approach of Data Security Governance. E-Government 4: 119–29. [Google Scholar]
  10. Guo, Yanbing, and Lijun Hu. 2022. Study on the Impact of AI and Human Capital on Industrial Structure Upgrading: Empirical Evidence from 30 Chinese Provinces. Soft Science 5: 21–26. [Google Scholar]
  11. Hu, Wei. 2015. Rules and Ways of Liability on Mining Damage. Journal of Political Science and Law 2: 121–28. [Google Scholar]
  12. Hu, Xiaowei, and Li Liu. 2024. The Full Process Regulatory Logic and Institutional Response of Artificial Intelligence Risks. Study and Practice 5: 22–30. [Google Scholar]
  13. Jiang, Lidan, and Lan Xue. 2022. The Current Challenges and Paradigm Transformation of New-Generation AI Governance in China. Journal of Public Management 2: 6–16. [Google Scholar]
  14. Li, Chengliang. 2010. Eco-injury: From the Perspective of Law of Torts. Modern Law Science 1: 65–75. [Google Scholar]
  15. Li, Xiuquan. 2017. Challenges and Countermeasures of Safety, Privacy, and Ethics in AI Applications. Science & Technology Review 15: 11–12. [Google Scholar]
  16. Matthias, Andreas. 2004. The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. Ethics and Information Technology 6: 175–83. [Google Scholar] [CrossRef]
  17. Peng, Hui. 2022. The Logical Structure and Boundary Setting of Data Ownership: From the Perspective of the “Tragedy of the Commons” and “Tragedy of the Anti-commons”. Journal of Comparative Law 1: 105–19. [Google Scholar]
  18. Patel, Dylan, and Gerald Wong. 2023. GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE. Available online: https://www.semianalysis.com/p/gpt-4-architecture-infrastructure?nthPub=11 (accessed on 28 August 2024).
  19. Pi, Yong. 2024. The Risk Prevention and Control Mechanism in the EU’s Artificial Intelligence Law and Their Implications for China. Journal of Comparative Law 4: 67–85. [Google Scholar]
  20. Salmon, Paul M., Chris Baber, Catherine Burns, Tony Carden, Nancy Cooke, Missy Cummings, Peter Hancock, Scott McLean, Gemma J. M. Read, and Neville A. Stanton. 2023. Managing the Risks of Artificial General Intelligence: A Human Factors and Ergonomics Perspective. Human Factors and Ergonomics in Manufacturing & Service Industries 33: 355–429. [Google Scholar]
  21. Shi, Jiayou, and Zhongxuan Liu. 2022. The Rule of Law Path of Ethical Governance of Science and Technology: Taking the Governance of Genome Editing as an Example. Academia Bimestris 5: 185–95. [Google Scholar]
  22. Stokel-Walker, Chris, and Richard Van Noorden. 2023. What ChatGPT and Generative AI Mean for Science. Nature 614: 214–16. [Google Scholar] [CrossRef]
  23. Sun, Weiping, and Yang Li. 2022. On the Ethical Principles of the Development of Artificial Intelligence. Philosophical Analysis 1: 6–17. [Google Scholar]
  24. The Paper. 2023. Miaoya Camera’s Privacy Policy Sparks Controversy: How Should Generative AI Be Regulated. Available online: https://www.thepaper.cn/newsDetail_forward_24080503 (accessed on 4 September 2024).
  25. Yang, Jianjun. 2024. The Development of Trustworthy AI and the Construction of Legal Systems. Oriental Law 4: 95–108. [Google Scholar]
  26. Zhang, Linghan. 2021. Algorithm Accountability in Platform Regulation. Oriental Law 3: 24–42. [Google Scholar]
  27. Zhang, Yuanke. 2021. Exclusive Interview with Academician Ji-feng He: The Most Important Leverage for Achieving Trustworthy Artificial Intelligence Lies in People. First Financial Information, July 16. Available online: https://view.inews.qq.com/k/20210716A07WBI00?web_channel=wap&openApp=false (accessed on 30 June 2024).
  28. Zhao, Jingwu. 2023. The Theoretical Misunderstanding and Path Transition in the Application Risk Governance of Generative Artificial Intelligence Technology. Jingchu Law Review 3: 47–58. [Google Scholar]
  29. Zhou, Jun. 2022. Practice and Exploration of Trusted AI in Digital Economy. Heart of Machine, March 31. Available online: https://cloud.tencent.com/developer/article/1969715 (accessed on 28 August 2024).
