Abstract
Rapid advancements in technology have resulted in the proliferation of self-driving vehicles, which have already presented significant challenges to the field of legal science. In the context of automated decision-making, the question of liability is invariably pertinent. The question of whether liability should be assigned to a non-human entity or to a group of people is a contentious one. Furthermore, the question of which entity should be held liable for compensation for damage caused and which entity should be criminally liable remains unresolved. In the context of self-driving vehicles operating at a lower level of automation, the identification of the driver’s liability, ostensibly a straightforward undertaking, gives rise to a multitude of intricate ethical dilemmas. In addition to the prevailing assumptions regarding liability, which have previously been discussed in detail in the literature, the study also addresses the issue of transparency in automated decision-making related to legal remedies.
1. Introduction
In almost all areas of society and science, discourse on technological development and its concomitant changes in the world is inevitable. The popularity of the subject does not, however, make it a passing concern: innovations open numerous new opportunities in welfare states and facilitate people’s lives, as well as benefiting the market economy. The advent of artificial intelligence (AI) has precipitated profound change in various domains, with one of the most pronounced manifestations being the emergence and proliferation of self-driving vehicles. The introduction of the autonomous vehicle (AV) has given rise to a plethora of issues of a highly interdisciplinary nature [] (p. 45). While the development process already necessitates a multidisciplinary approach, it will become even more intricate to address economic, technical, social and legal perspectives and expectations when truly self-driving vehicles eventually participate in traffic.
2. Understanding Automation Levels and Legal Liability in Autonomous Vehicles
An AV is a vehicle capable of sensing its environment and navigating safely with minimal or no human intervention. The advent of driverless vehicles is poised to transform the manner in which people and goods are conveyed, thereby making a substantial contribution to the evolution of contemporary society and the societal framework of the future []. The conceptualisation of self-driving cars (a term encompassing buses and trucks that do not run on a fixed track) necessitates an understanding of the term “self-driving”, for which an appreciation of the levels of automation serves as a foundational starting point [] (p. 953). The categorisation of these levels varies across organisations; the most prominent systems are those employed by the National Highway Traffic Safety Administration (NHTSA) and the Society of Automotive Engineers (SAE) in the United States [,]. The different levels are associated with the assessment of civil and criminal liability.
Level 0 of automation is characterised by the absence of automation. Here the human element is indisputable: the driver controls the vehicle’s acceleration, braking, and steering, governing essentially all of its functions, with the assistance of warning sounds and safety intervention systems. Automatic emergency braking also falls within level 0. Level 1 is driver support. Under specific driving conditions the system can assume control of either the steering wheel or the pedals, but never both concurrently. The driving process is thus assisted, yet the operator retains complete control. The clearest examples of first-level automation are adaptive cruise control and parking assist. Level 2 refers to partial self-driving functionality. At this level there are modes in which the car can, under certain conditions, control both the pedals and the steering wheel concurrently, so the driver may take their hands off the wheel; it nevertheless remains incumbent upon the driver to maintain constant vigilance over the vehicle and to intervene if the situation demands. Level 3 of automation is defined as conditional automation. This level approaches, but does not yet amount to, full autonomy. The vehicle is equipped with a suite of features that, under specific circumstances, can assume complete control of the driving functions and of the monitoring of the driving environment; however, the human driver must resume control immediately upon the system’s request. At this level, the vehicle can determine when to change lanes and how to react to dynamic events in the road environment, with the human driver serving only as a backup system. Even so, this level demands a high degree of responsibility and attention from the driver, who must be able to react quickly to the system’s request for intervention [].
It is important to note that legal liability at levels 0–3 is assessed uniformly, and raises no particular concerns. It can be posited that the natural person has effective control over the vehicle; at these levels of automation, genuine self-driving functionality remains illusory. In the context of product liability, the legislator imposes various forms of liability on the individual, unless there is clear evidence of a malfunction or defect, in which case the manufacturer is held accountable under the provisions of product liability law. Human intervention typically prevents accidents and damage in such cases [] (pp. 31–42). Even so, product liability in such a complex system of relationships can be quite challenging for enforcers, even at low levels of automation []. This legislative position is logical, insofar as technological innovations at these levels are intended to promote human comfort and reduce the potential for human error, thereby reducing the number of accidents [,,].
Level 4 is analogous to level 3, yet is considered significantly safer. The vehicle is capable of operating autonomously under specific conditions, negating the need for human intervention. Should the system encounter an issue it cannot resolve, it requests human assistance; the salient distinction is that, in the absence of a response, the passengers’ safety is not compromised. This marks a significant milestone in automotive technology, as it approaches the capabilities of a fully self-driving vehicle. From a legal perspective, the most significant challenge is level 5, full automation. At this level the vehicle drives itself, with no requirement for human intervention, although that option remains available. All driving tasks are performed by the computer system on any road, in any conditions, whether or not a human is in the car []. It is this author’s opinion that the regulation of liability at levels 4–5 can no longer be accommodated within the existing legal framework.
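The taxonomy above can be summarised informally. The sketch below is an illustrative aid only, not an excerpt from any SAE publication: the `SAE_LEVELS` table and the `effective_human_control` helper are hypothetical names encoding the article’s working premise that a natural person retains effective control up to level 3.

```python
# Informal summary of the SAE J3016-style levels discussed above,
# paired with the article's liability reading: levels 0-3 leave a
# natural person in effective control; levels 4-5 fall outside the
# existing legal framework. (Illustrative sketch, not an SAE source.)
SAE_LEVELS = {
    0: ("no automation",          "driver performs all driving tasks"),
    1: ("driver support",         "steering OR pedals assisted, never both"),
    2: ("partial automation",     "steering AND pedals together; driver monitors"),
    3: ("conditional automation", "system drives; driver must take over on request"),
    4: ("high automation",        "no takeover needed within its operating domain"),
    5: ("full automation",        "any road, any conditions; human optional"),
}

def effective_human_control(level: int) -> bool:
    """Article's premise: a natural person has effective control up to level 3."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown automation level: {level}")
    return level <= 3
```

On this reading, the uniform liability regime described above applies exactly where `effective_human_control` returns `True`.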
3. Dilemmas of Full Automation
Some arguments have been posited within the extant literature on the subject of AI and its legal status. At this juncture, however, AI cannot be considered a legal entity []. With regard to criminal liability, a road accident or other criminal offence involving a self-driving vehicle presents a paradoxical situation, and societal expectations exert the greatest pressure in such cases. The victim, or their relatives, have the right to claim compensation and damages; furthermore, they are entitled to expect the imposition of a sanction by the criminal power of the State. It is imperative to elucidate the causal factors that precipitated the incident, on the assumption that the accident was attributable to a software defect in the AI system and that the AI itself cannot be held accountable. The black box in self-driving cars can assist in clarifying this issue []. Furthermore, the question arises as to whether a subject of the crime can even be identified in this context, given the inability to identify a perpetrator [] (pp. 10–21) and [].
From the perspective of the individual occupying the autonomous vehicle, i.e., the owner, it can be argued that the imposition of liability, particularly in the context of criminal prosecution, is both unjust and unconstitutional. Whilst holding the manufacturer liable under both civil and criminal law may appear a satisfactory solution, it overlooks a significant factor: it is improbable that manufacturers will assume the risk of financial and criminal consequences if the software malfunctions, even during the testing phase [] (pp. 93–102). Liability concerns must nonetheless be addressed, given the inherent limitations of achieving a fully accident-free AV [] (p. 100). The regulatory framework for mixed traffic, in which self-driving vehicles at levels 4–5 of automation share the road with automated vehicles at levels 0–3, likewise remains underdeveloped. Present regulatory trends do not appear to indicate a trajectory towards solutions that entirely eliminate the human element. The EU AI Act has established standards for the governance of AI that have the potential to influence ethical and regulatory frameworks on a global scale: it is a unique and comprehensive piece of legislation [] that prohibits certain uses of AI for the time being, while imposing strict risk management, transparency and accountability criteria as conditions of permissibility [] (p. 1). In the regulatory sphere of AI, a consensus exists among nations worldwide regarding the paramount importance of human control and the necessity of systematic review of decisions. In the absence of human intervention, the ethical decision-making capabilities of AI should be programmed to ensure optimal decisions in situations involving moral dilemmas (Judith Jarvis Thomson: The Trolley Problem) [].
However, it is important to recognise that there is no such thing as an entirely correct or ‘just’ decision. The question of responsibility in the context of AI, particularly in scenarios of full autonomy, necessitates elucidation. Presumably, AI will be programmed to formulate decisions that are economically optimal, rational, and within the bounds of the regulatory environment, as exemplified by the deep deterministic policy gradient [] algorithm. However, this does not guarantee the protection of the occupant’s life in all circumstances, an issue that remains unresolved from the standpoint of fundamental rights []. Extending regulation beyond the European context to a broader international framework would be a significant milestone: a unified global framework would facilitate consistent safety and ethical regulation of autonomous vehicles, as well as the systematic governance of liability issues [] (p. 9). As noted above, however, the EU AI Act does not specifically address the issue of liability. With regard to AVs, it can be posited that at levels 4–5 of automation they qualify as high-risk, since they are used in critical infrastructure and transport and pose high risks to human health, safety, and fundamental rights [].
4. The Social Impact of AI’s Error
A review of extant research and statistical data indicates that automated vehicles will have a significant impact on road safety in the future. Precise rates cannot be determined by any single study, as each measurement employs distinct criteria and models [] (p. 106003). At the lower levels of automation (1–3), human characteristics and capabilities evidently play a significant role in the occurrence of accidents.
With regard to accidents, human error has been shown to be responsible for as much as 90% of cases [] (pp. 532–537). At levels 4–5, the high degree of automation means that human factors such as fatigue, irritability, slowed reactions, and driving under the influence of alcohol or drugs are eliminated. However, it is important to acknowledge the inherent fallibility of software: as previously stated, an accident at these levels would be attributable exclusively to a software malfunction. Allocating culpability for accidents caused by software failures to manufacturers or programmers is an erroneous solution not only from a market perspective but also from a legal perspective in the context of AI. An AI’s algorithmic process is not explicitly constrained by human rules or determined by a sequence of human decisions; rather, it derives its conclusions from the available data. This process is based on machine learning and can lead to results that are unknown or unpredictable even to the software developer, especially for deep learning varieties [].
Further consideration of the issue of liability reveals another aspect to be explored: what impact on society and on the development of artificial intelligence can be expected if the legislator were to impose liability on the vehicle operator under the rules of liability for hazardous operations? This is because the user, as the owner of the self-driving vehicle, is the one who effectively carries on the hazardous operation [] (p. 52). The imposition of user or supervisor liability enables the company to circumvent direct accountability and either to further develop the application or to withdraw a product that has made an erroneous decision. This can be regarded as a business decision, which may even elicit a favourable response from users. Consequently, the legislator exerts no pressure on AI developers to enhance, evaluate and ensure the safety of their products prior to commercial release [] (p. 101), which can act as a significant impediment to innovation. Law generally responds to changes in the prevailing social situation and to the needs of society [] (p. 147), so one could conclude that these concerns will be addressed when the hypothetical situation arises. A claim arising from an accident or injury may be regarded as a form of compensation for the harm suffered by the victims, and victims may also perceive criminal liability as a form of satisfaction. However, the issue of liability in the context of AI also pertains to the right of the individual against whom liability is imposed to a remedy, and thus to his or her fundamental right to a fair trial.
The predominant critique of AI pertains to a challenge referred to as the “black box problem.” The developer regards the precise technical details of the software as proprietary information, impeding effective scrutiny of its operation by courts and enforcement authorities. This absence of full transparency hinders effective redress, given that the rationale behind a decision made by the software in an AV remains undisclosed. The advent of AI has transformed law and society, affecting many facets of people’s daily lives, and fully autonomous vehicles raise concerns for individual life, limb and safety. Consequently, legal frameworks must be meticulously delineated to safeguard fundamental rights prior to deployment. The ethical considerations inherent in the development and application of artificial intelligence, particularly with regard to autonomous decision-making and responsibility, represent a significant challenge and a complex ethical area [] (p. 53). The present article concludes that the issue of liability remains unresolved and necessitates a solution. The fundamental issue appears to be the element of uncertainty, which engenders a sense of vulnerability in the face of AI, particularly in the context of AVs. Anticipated technological advancements in automated vehicles are poised to bring a substantial decline in traffic accidents and fatalities, which prompts a critical evaluation of the implications for criminal law [] (p. 18).
5. Conclusions
As demonstrated in the article, the reasoning indicates that criminal liability should not be imposed on either the user or the producer; the exclusion of criminal liability therefore appears a viable solution. It is reasonable to hypothesise that social dissatisfaction will be negligible in comparison to the advantages offered by AVs. The law cannot allow a complete absence of responsibility; from a criminal law perspective, however, even the dogmatic foundations are lacking, as a self-driving vehicle cannot commit a crime. A re-evaluation of this issue could prove fruitful from the standpoint of ascertaining responsibility [] (p. 369). Establishing objective liability, analogous to civil law, clashes with numerous fundamental principles of criminal law. Consequently, in the absence of a superior solution, criminal liability currently lies with the natural person rather than with the artificial intelligence: the self-driving car is merely a means of committing the crime, and responsibility for the actions of the AI system lies solely with the user. However, as AI development continues and evolves, particularly in the context of legal entities, a hybrid form of liability may emerge, involving joint responsibility of both the AI system and its operator for any actions taken [] (p. 58). “A pivotal role of transport systems is to ensure universal access to mobility. The advent of self-driving cars has the potential to transform door-to-door mobility for individuals who currently lack the financial means to own a car or who are unable to drive due to factors such as age, disability, or visual impairment. Furthermore, for those residing in areas that are not well-served by public transportation, self-driving cars could become a viable alternative, thereby addressing the accessibility challenges posed by existing transport systems.
Consequently, the advent of self-driving cars has the potential to mitigate the disadvantages associated with mobility constraints [] (p. 176).”
In conclusion, the proposal regarding civil liability builds on a resolution adopted by the EU [,] (p. 67). In accordance with the principles of insurance, manufacturers of AVs would be obligated to contribute a specified amount to a state-established fund at the time of marketing; furthermore, a percentage of the purchase price would be allocated to this fund at the time of sale. In the event of damage caused by an AV that is clearly attributable to a software defect, reimbursement could be sought from the designated fund. The dilemmas delineated above pertain to a number of academic disciplines and present a significant challenge to engineers, lawyers and economists alike. The present article aims to contribute to informed decisions in a small segment of modern technologies. The key to a responsible legal response to emerging technology is a fundamental approach that does not merely assume risks, but is scientifically convinced of their existence [] (p. 4).
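The funding mechanism proposed above reduces to simple arithmetic. In the sketch below, the fixed fee charged at marketing and the percentage levied at sale are invented placeholders (neither the resolution nor this article specifies figures), and `fund_contribution` is a hypothetical helper name:

```python
def fund_contribution(purchase_price: float,
                      marketing_fee: float = 5_000.0,
                      sale_rate: float = 0.01) -> float:
    """Total paid into the state compensation fund for one vehicle:
    a fixed manufacturer fee at marketing plus a share of the purchase
    price at sale. Both figures are assumptions for illustration only."""
    if purchase_price < 0:
        raise ValueError("purchase price cannot be negative")
    return marketing_fee + sale_rate * purchase_price

# e.g. a vehicle sold for 40,000 units of currency:
# 5,000 fixed at marketing + 1% of the price at sale
contribution = fund_contribution(40_000.0)
```

Victims of damage clearly attributable to a software defect would then be reimbursed from the accumulated fund, decoupling compensation from the contested question of individual fault.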
Funding
The research was supported by the European Union within the framework of the National Laboratory for Autonomous Systems (RRF-2.3.1-21-2022-00002).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
The author declares no conflicts of interest.
References
- Tóth, T. Az önvezető járművekkel kapcsolatos jogi felelősség [Legal Liability and the Autonomous Vehicles]. Közlekedéstudományi Szle. 2021, 71, 45–52. [Google Scholar] [CrossRef]
- Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
- Lukovics, M.; Udvari, B.; Zuti, B.; Kézy, B. Az önvezető autók és a felelősségteljes innováció [Autonomous Vehicles and Responsible Innovation]. Közgazdasági Szle. 2018, 65, 949–974. [Google Scholar] [CrossRef]
- SAE International. Surface Vehicle Recommended Practice (R) Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; SAE International: Warrendale, PA, USA, 2018; Available online: https://www.sae.org/news/press-room/2018/12/sae-international-releases-updated-visual-chart-for-its-“levels-of-driving-automation”-standard-for-self-driving-vehicles (accessed on 16 May 2025).
- Reese, H. Autonomous Driving Levels 0 to 5: Understanding the Differences; Tech Republic: Berlin, Germany, 2016. [Google Scholar]
- Blain, L. Self-Driving Vehicles: What Are the Six Levels of Autonomy? New Atlas, 8 June 2017. Available online: https://newatlas.com/sae-autonomous-levels-definition-self-driving/49947/ (accessed on 16 May 2025).
- Karácsony, G. Inkább bízzunk a robotokban? A mesterséges intelligencia döntéseiért való emberi felelősség kritikája [Shall We Trust in the Robots Instead? Critics of Human Liability for the Decisions Made by AI]. Jog Állam Politika Jog-És Politikatudományi Folyóirat 2020, 12, 31–42. Available online: https://real.mtak.hu/214594/7/JAP_2020_KOLONSZAM_g-karacsony-gergely.pdf (accessed on 7 July 2025).
- Villasenor, J. Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation; Brookings Institution: Washington, DC, USA, 2014. [Google Scholar]
- Scanlon, J.M.; Kusano, K.D.; Daniel, T.; Alderson, C.; Ogle, A.; Victor, T. Waymo simulated driving behavior in reconstructed fatal crashes within an autonomous vehicle operating domain. Accid. Anal. Prev. 2021, 163, 106454. [Google Scholar] [CrossRef] [PubMed]
- Ahangarnejad, A.H.; Radmehr, A.; Ahmadian, M. A review of vehicle active safety control methods: From antilock brakes to semiautonomy. J. Vib. Control 2021, 27, 1683–1712. [Google Scholar] [CrossRef]
- Bareiss, M.; Scanlon, J.; Sherony, R.; Gabler, H.C. Crash and injury prevention estimates for intersection driver assistance systems in left turn across path/opposite direction crashes in the United States. Traffic Inj. Prev. 2019, 20, S133–S138. [Google Scholar] [CrossRef] [PubMed]
- Keserű, B.A. A mesterséges intelligencia magánjogi mibenlétéről [The Civil Law Aspects of AI]. In Az Autonóm Járművek és Intelligens Rendszerek Jogi Vonatkozásai; Fazekas Judit, L., Gábor, K., Eds.; Universitas-Győr: Győr, Hungary, 2020; pp. 1–15. [Google Scholar]
- Feng, R.; Yao, Y.; Atkins, E. Smart Black Box 2.0: Efficient High-Bandwidth Driving Data Collection Based on Video Anomalies. Algorithms 2021, 14, 57. [Google Scholar] [CrossRef]
- Ambrus, I. A mesterséges intelligencia és a büntetőjog [AI and Criminal Law]. Állam És Jogtudomány 2020, 4, 4–23. [Google Scholar]
- Nichols, C. Liability Could Be Roadblock for Driverless Cars. The San Diego Union-Tribune, 30 October 2013. Available online: https://www.sandiegouniontribune.com/2013/10/30/liability-could-be-roadblock-for-driverless-cars/ (accessed on 19 May 2025).
- Goodall, N.J. Machine ethics and automated vehicles. In Road Vehicle Automation; Meyer, G., Beiker, S., Eds.; Springer: Cham, Switzerland, 2014; pp. 93–102. [Google Scholar] [CrossRef]
- Béla, C. Az autonóm járművek és a termékfelelősség, avagy mennyiben indokolt a termékfelelősségi szabályok reformja. [Autonomous vehicles and product liability, or the case for reforming product liability rules]. Jog Állam Polit. Jog És Polit. Folyóirat 2021, 3, 87–103. [Google Scholar]
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 21 June 2025).
- Ijuo, I. Understanding the EU AI ACT: Key Requirements and Provisions. The Emerging Law Series, 2025; p. 1. Available online: https://www.researchgate.net/publication/388721574_Understanding_The_EU_AI_ACT_Key_Requirements_and_Provisions (accessed on 21 June 2025).
- Thomson, J.J. The Trolley Problem. Yale Law J. 1985, 94, 1395–1415. [Google Scholar] [CrossRef]
- Rizehvandi, A.; Azadi, S.; Eichberger, A. Decision-Making Policy for Autonomous Vehicles on Highways Using Deep Reinforcement Learning (DRL) Method. Automation 2024, 5, 564–577. [Google Scholar] [CrossRef]
- Rosenberg, J. Autonomous Vehicles. In Electrical and Computer Engineering Design Handbook; Tufts University: Medford, MA, USA, 25 March 2015; Available online: https://sites.tufts.edu/eeseniordesignhandbook/2015/autonomous-vehicles/ (accessed on 22 June 2025).
- Taha, M.A. AI Ethics in Autonomous Vehicles: Balancing Innovation and Safety. Eurasian J. Theor. Appl. Sci. 2025, 1, 1–13. [Google Scholar]
- Sohrabi, S.; Khodadadi, A.; Mousavi, S.M.; Dadashova, B.; Lord, D. Quantifying the automated vehicle safety performance: A scoping review of the literature, evaluation of methods, and directions for future research. Accid. Anal. Prev. 2021, 152, 106003. [Google Scholar] [CrossRef] [PubMed]
- Fleetwood, J. Public health, ethics, and autonomous vehicles. Am. J. Public Health 2017, 107, 532–537. [Google Scholar] [CrossRef] [PubMed]
- Miért Kell Belelátnunk az AI Fekete Dobozába? [Why We Have to Look Into the Black Box of AI?] New Technology Online Magazin. 24 June 2023. Available online: https://newtechnology.hu/miert-kell-belelatnunk-az-ai-fekete-dobozaba/ (accessed on 22 June 2025).
- Karácsony, G. Okoseszközök—Okos Jog? A Mesterséges Intelligencia Szabályozási Kérdései [Smart Advices—Smart Law. Regulationary Issues of AI]. Available online: https://openaccess.ludovika.hu/nke/catalog/book/170 (accessed on 21 April 2025).
- Udvary, S. Az önvezető gépjárművek egyes felelősségi kérdései [Some Liability Questions of Autonomous Vehicles]. Pro Publico Bono—Magy. Közigazgatás 2019, 7, 146–155. [Google Scholar] [CrossRef]
- Papp, R. A mesterséges intelligencia etikai kihívásai. [The ethical challenges of artificial intelligence]. Jogelméleti Szle. 2025, 1, 53. [Google Scholar] [CrossRef]
- Ambrus, I. Az Autonóm Járművek és a Büntetőjogi Felelősségre Vonás Akadályai [Autonomous Vehicles and the Obstacles of Criminal Liability]. Available online: http://hdl.handle.net/10831/56638 (accessed on 22 June 2025).
- Maximilian, K. AI and Responsibility: No Gap, but Abundance. J. Appl. Philos. 2025, 42, 357–374. [Google Scholar] [CrossRef]
- Csemáné Váradi, E.; Balla, B. Mesterséges intelligencia és büntetőjogi felelősség [Artificial intelligence and criminal liability]. In Jogi Kihívások és Válaszok a XXI. Században; Zoltán, V., Ed.; 2024; pp. 50–63. Available online: https://jogikar.uni-miskolc.hu/files/29755/Jogi%20kihívások%20és%20válaszok%20a%20XXI.században%203..pdf (accessed on 31 May 2025).
- Lakatosné Novák, É. Mégis kinek a hibája? Egy önvezető jármű balesete kapcsán felmerülő felelősségi kérdések [Whose Fault is That? Liability Issues Caused by an Autonomous Vehicle]. In A Gazdasági Jogalkotás Aktuális Kérdései; Glavanits, J., Ed.; Dialóg Campus: Budapest, Hungary, 2019; p. 176. [Google Scholar]
- European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=oj:JOC_2018_252_R_0026 (accessed on 24 June 2025).
- Keserű, B.A. A 21. Századi Technológiai Változások Hatása a Jogalkotásra [The Technological Innovations of 21st Century and Their Impacts on Legislation]; Ludovika Egyetemi Kiadó: Budapest, Hungary, 2020; p. 67. [Google Scholar]
- Tóth, A. A mesterséges intelligencia szabályozásának paradoxonja és egyes jogi vonatkozásainak alapvető kérdései [The Paradox of Regulation on AI and Its Basic Legal Issues]. Infokommunikáció És Jog. 2019, 2, 3–9. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).