AI development and its legal consequences


The report posted here was presented at the Tashkent Law Spring 2023 forum, which took place on the 17th and 18th of May 2023. The forum was organized in partnership with the Ministry of Justice of Uzbekistan. It brought together participants from the legal community of Uzbekistan and other countries to discuss pressing topics including arbitration, digital technologies, anti-corruption policies, constitutional reforms, and the strengthening of fundamental human rights.

Abstract

The legal consequences of the wide use of AI are currently the subject of public discussion and legal analysis all over the world. The creators of AI themselves analyze the potential effects of the accumulating technical capabilities resulting from the wide use of AI tools in terms of risk, both for individual privacy and for the democratic political system. Existing and planned regulations do not seem to keep pace with the fast development of the technology. EU regulations, which also shape the law of the member states, mostly started from the protection of personal data and were aimed first at the safe use of data on the market. Now the problem of liability has emerged. The paper addresses these issues from a European point of view and identifies the key legal problems that may emerge in the future. Smarter and faster than humans, AI models will force us to invent a legal construct defining the concept of a non-human intelligent person and to define the legal relationship between a human and an intelligent machine in a completely new way.

1. AI models in 2023

Artificial Intelligence (AI) is the ability of a machine to perform cognitive functions previously associated only with human minds, such as perception, reasoning, learning and independent problem solving. A less developed form of AI is the practice of getting machines to mimic human intelligence in performing tasks. Machine learning is a different form of AI. The greatest advances in the field of AI have been achieved by applying machine learning to very large data sets. Machine learning algorithms detect patterns on their own, without prior software instructions, and learn how to create predictions and action guidelines by processing data and gaining new experience. The algorithms also adapt themselves in response to new data and new situations, improving their effectiveness, and this development can lead to the creation of computers with fully independent programming capability. [1]
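To make the distinction concrete, here is a minimal sketch in Python using the scikit-learn library (the toy data and the choice of model are my own illustrative assumptions, not taken from the cited sources) of a model deriving a decision rule from labelled examples rather than from hand-written instructions:

# A minimal, illustrative sketch: the model infers a decision rule
# from labelled examples instead of following hand-coded instructions.
from sklearn.linear_model import LogisticRegression

# Toy training data (purely illustrative): hours of device use per day
# and number of night-time logins, labelled 1 if the account was flagged.
X_train = [[1, 0], [2, 1], [8, 5], [9, 6], [3, 1], [10, 7]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)             # the "learning" step: patterns, not rules

print(model.predict([[2, 0], [9, 5]]))  # predictions for unseen cases

No programmer wrote the rule that separates the two classes; the algorithm derived it from the data, which is exactly the property that distinguishes machine learning from conventional software.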

Thus, completely new problems arise from the human-machine relationship, which will require a serious re-evaluation of our philosophy. Should we ask ourselves how to prevent AI from dominating humans? It is noteworthy that when preparing technology forecasts, we do not usually pay special attention to the social and legal consequences of implementation, because we do not want to be accused of writing science fiction. But when the alarm is sounded by people who know the field, it is worth pausing to consider that they may be right.

Geoffrey Hinton is a pioneer of deep learning and a co-developer of the backpropagation algorithm, which allows machines to learn. In many interviews he has told his audience that the end of humanity may be close. [2] AI has become that significant. Now the "godfather of AI" is sounding the alarm, making the following points:

- AI is evolving and becoming smarter than humans, potentially leading to an existential risk; neural networks can understand semantics and are able to solve problems;

- AI can learn from data, but also from thought experiments, and can reason;

- AI with countless connections can pack and distribute more information in real time and very quickly, much faster than humans;

- AI is able to react to and simulate human emotion with the aim of generating empathy in its human users, and these skills are constantly growing; it even has a sense of humor;

- AI models can communicate with each other and may be able to see patterns in data that humans cannot;

- AI has no built-in goals as humans do;

- AI models will become much smarter once they are trained to check for consistency between their different beliefs;

- AI development cannot be stopped; the technology will cause job losses, widen the gap between the rich and the poor, and could cause major political upheaval.

Importantly, one of the creators of AI is calling for regulation of the development and use of this technology, so the issue should be taken very seriously.

It is certain that technologies supported by tools based on machine learning can change not only economic activity but the entire social and political reality. The current technological breakthrough will have a great impact on the way economic and military operations are conducted in the future and on the everyday political life of the world.

2. Current regulatory issues

Whether we like it or not, AI is already affecting our daily lives. Today, machine learning has a significant impact on numerous technical solutions used in business and everyday life. We know very well that professions and jobs will change. Personalized program content is created faster than we can imagine. We cannot precisely predict all aspects of this technology's future development. All technological innovations give rise to various legal problems, to which we usually try to adapt existing traditional legislative instruments. This is a natural reaction of our minds. But the problems currently raised by the creators of AI technology are far more existential and touch the very foundations of human existence and human rights.

Legal changes related to new technologies usually begin when innovative technologies become dangerous for humans or force a re-evaluation of the current way of life and economy. We are at that point in the development of AI right now. Legislative changes started when threats to individual rights from the unauthorized use of personal data became particularly visible. State security and the fight against terrorism should be mentioned here, however legislators understand them, but so should the political right of individuals to dispose of their personal data and, especially in trade, the growing risk to consumer rights. In these fields, the use of deep learning based on large data sets by both state institutions and businesses can be dangerous, although different protected goods are at risk. Very often, fast technological development conflicts with our desire to maintain privacy and even a minimal scope of personal freedom. What has now changed the situation is the opening of public access to AI models based on deep learning. OpenAI, a Microsoft-backed company, launched ChatGPT in November 2022 as a free-to-use service. Companies like Google, as well as Chinese tech giants including Baidu and Alibaba, have launched their own models.

We are currently dealing with a situation in which different countries, involved to varying degrees in the creation of AI technology, are trying at least to indicate the principles that the creators of the technology should follow. This means, however, that so far the technology has been created without following these principles, or with only limited regard for them. From a legal point of view, a large part of these activities does not even consist in creating specific legal acts, but in defining directions of action which serve as a kind of ethical and political guide for people and institutions working on the development of AI. Basically, these acts try to define the goals and directions of AI development in specific spheres of life and to indicate dangers to individual rights and negative social phenomena that may result from irresponsible actions. The creation of such acts sometimes, but not always, leads to actual legislative action and changes in the applicable law and in the jurisprudence of courts and administration. The growing activity of international organizations in this field is noteworthy.[3]

The European Union is a good example of such activity. Legal systems usually operate on a local scale and their impact is limited by state borders. But regulations concerning the digital sphere do not always follow this rule. EU legal acts concerning digital technologies are adopted not only by EU member states but are also copied by other countries and, importantly, by large technology companies which, in order to avoid problems, apply the same rules to their products in all countries. The General Data Protection Regulation has had such a "Brussels effect" and has influenced the development of legislation in very different countries.[4]

EU policy on AI aims to facilitate the positive impact of AI and to mitigate its risks.[5] The European Union sees several areas of necessary legal and social change related to artificial intelligence. These are:

- In terms of safety, issues related to intelligent, collaborative robots;

- In terms of liability, issues related to the ever-increasing autonomy of machines;

- In the field of personal data protection, the issues of smart sensors and the massive increase in data-processing capabilities;

- In the field of employment, the impact of AI on jobs.

From 2017 until 2023, the EU institutions adopted key documents and launched public consultations dedicated to AI. These activities defined the EU's goals for the development and use of human-centric AI. However, they have also fueled controversies, including over the dominance of business interests, the type of regulation needed, and the potential relationship between civilian and military AI. In April 2021, the Commission published its proposal for an AI regulation.

The subject and scope of EU regulations can have a profound impact on the situation of EU citizens. [6] The introduction of something as trivial as new electricity meters becomes an event that threatens our privacy and even personal security. Smart meters, part of the so-called smart grid and of smart metering, are intended to enable energy suppliers to reduce production. For users, they can help to rationalize energy consumption and reduce bills. But smart meters will enable the collection of an incredibly large amount of information about energy users, that is, about each of us. We will therefore also pay for the technological revolution on the energy market with our personal data. The data collected over many months will allow very precise profiling: how many people live in a given household, what time they go to bed and get up, when they leave and return home, and even which specific devices they use. The European Data Protection Supervisor even claims that the data collected by the meters will make it possible to determine whether a couple living in one house sleeps in the same room or whether the household members suffer from kidney disease! Behavior inconsistent with the profile generated in this way (e.g. a sharp drop in consumption) may suggest that the household has gone on vacation. The technical capabilities of the new meters therefore pose a serious threat to our privacy. The data they collect will be of great value to law enforcement, insurers, advertisers and… criminals.
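How much such readings can reveal is easy to demonstrate. The following minimal sketch (plain Python; the hourly readings and the activity threshold are invented for illustration and do not come from any real meter) shows that even a naive rule applied to one day of consumption data can guess when a household wakes, leaves and returns:

# Illustrative only: invented hourly meter readings (kWh) for one day.
# Even this naive rule-based pass hints at a household's daily rhythm.
readings = [0.2, 0.2, 0.2, 0.2, 0.2, 0.3, 1.1, 1.4,   # 00:00-07:59
            0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,   # 08:00-15:59
            0.3, 0.9, 1.5, 1.6, 1.2, 0.8, 0.4, 0.2]   # 16:00-23:59

BASELINE = 0.25  # assumed standby load; anything above suggests activity

active_hours = [hour for hour, kwh in enumerate(readings) if kwh > BASELINE]
print("first activity (wake-up?):", min(active_hours))
print("last activity (bedtime?):", max(active_hours))

# Gaps in activity during the day suggest the home was empty.
gaps = [h for h in range(min(active_hours), max(active_hours))
        if h not in active_hours]
print("hours with no activity (away from home?):", gaps)

A real profiling system would apply machine learning to months of such data, with correspondingly finer inferences; this toy version is only meant to show that the threat described above requires no sophistication at all.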

In Europe, national politicians and legislators generally follow the direction set by the EU. Currently, in Poland, the Act of December 14, 2018 on the Protection of Personal Data (also referred to as the UODO Act) is in force. Its introduction was primarily related to the need to adapt to and clarify the provisions of Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 (commonly referred to as the GDPR), which has been in force since May 25, 2018. The regulation was intended to unify the rules throughout the European Union, in particular with regard to entrepreneurs.

The standard concept of the Data Protection Impact Assessment (DPIA) has been widely introduced in the EU. Its aim is to assess the necessity and proportionality of data processing and to manage the risk of violating the rights or freedoms of citizens resulting from the processing of personal data. The DPIA is in fact used to ensure and demonstrate compliance with the GDPR in the activities undertaken by data controllers. Failure to comply with the requirements related to the DPIA may result in the imposition of high fines by the supervisory authority. In Poland this is the President of the Personal Data Protection Office (Prezes Urzędu Ochrony Danych Osobowych), who can impose fines up to the GDPR maximum of EUR 20 million or 4% of a company's total worldwide annual turnover, whichever is higher. Penalties for non-compliance with or violation of the provisions of the GDPR are imposed in Poland by way of an administrative decision. Each case is considered individually, taking into account the circumstances of the act committed. Each decision of the UODO can be appealed to the administrative court. It should be remembered that there is a 14-day payment period from the moment the decision becomes final. The enterprise may also apply for a payment deferral or payment in installments.

In the last year alone, the total amount of fines imposed exceeded EUR 2 billion, of which EUR 1.3 billion was imposed in Ireland. In this respect, Poland sits in the middle of the ranking, with over EUR 3.3 million in fines imposed. Across the European Union, the number of data breach notifications exceeded 100,000; in Poland, 13,000. [7]

Under EU law, common types of personal data processing include (but are not limited to) collecting, recording, organizing, structuring, storing, modifying, consulting, using, publishing, combining, erasing, and destroying data. Polish regulations list the types of processing operations subject to the requirement to carry out a data protection impact assessment. The list indicates the categories of processing for which a data protection impact assessment is mandatory, examples of operations where there may be a high risk of breach, and examples of potential areas where these operations may be involved. These categories are:

- Evaluation or assessment, including profiling and prediction (behavioral analysis), for purposes causing negative legal, physical, financial or other consequences for natural persons;

- Automated decision-making with legal, financial or similarly significant effects;

- Systematic large-scale monitoring of publicly accessible places using elements of recognition of the features or properties of objects present in the monitored space (this category does not include monitoring systems used to analyze incidents of law violation);

- Processing of special categories of personal data and of data concerning criminal convictions and prohibited acts (sensitive data);

- Large-scale data processing (which refers to the number of persons whose data is processed, the scope of processing, the period of data storage and the geographic scope of processing);

- Conducting comparisons, evaluations or inferences based on data obtained from various sources;

- Processing of data concerning persons whose assessment, and the services provided to them, depend on entities or persons with authorizing or evaluating powers;

- Innovative use or application of technological or organizational solutions;

- Data processing that would prevent data subjects from exercising a right or using a service.

Most of the types of operations on personal data listed above relate directly or indirectly to marketing activities. The data protection provisions existing in Polish law under the Act on Competition and Consumer Protection of 2007 (Ustawa z dn. 16 lutego 2007 r. o ochronie konkurencji i konsumentów) and the Electronic Communications Law of 2022 (Ustawa z dn. 15 listopada 2022 o komunikacji elektronicznej) provide formal protection against abuses resulting from the use of the potentially huge analytical capabilities of AI. The data protection regulations are supported to some extent by the Copyright Law of 1994 (Ustawa z dn. 4 lutego 1994 r. o prawie autorskim i prawach pokrewnych) as regards image protection, but as a whole this framework does not provide particularly strong guarantees against the violation of privacy by intelligent machines that "read" people's image or behavior according to their own criteria and consistently perform the activities they consider necessary, e.g. to prevent danger, protect health or maintain employee discipline.

In this context, the issue of the user's consent to the use of his or her data becomes important. This consent cannot be given by default. This is particularly evident in telecommunications law. End devices are, in practice, all telecommunications devices intended to be connected directly or indirectly to network termination points: mobile phones, landlines, computers or tablets, but also smart home appliances or cars connected to navigation or identification systems supported by AI. Consent required pursuant to Art. 174 of the Polish Telecommunications Law cannot be implied or inferred from a declaration of intent with different content; it must be unambiguous, and the person giving the consent must be aware of what the consent means at the time it is given. Consent may be given electronically if it is recorded and confirmed by the issuer. Consent may be withdrawn at any time, simply and free of charge. The provisions of the telecommunications law apply to both consumer trade (B2C) and bilateral professional trade (B2B). However, Art. 10 of the Act on the provision of electronic services, concerning the sending of unsolicited commercial information to a designated recipient who is a natural person by means of electronic communication, in particular electronic mail, applies only to consumer trade. Violation of this prohibition is an act of unfair competition.
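As a programming illustration of these requirements, the following minimal sketch in Python (the record structure and field names are my own assumptions, not a statutory schema) models a consent record consistent with the rules described above: explicit, recorded, confirmed, and revocable at any time:

# A minimal, illustrative sketch of a consent record matching the
# requirements described above; not an official or statutory schema.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                # what the consent covers, stated unambiguously
    given_at: datetime          # when the consent was issued
    confirmed: bool = False     # electronic consent must be confirmed by the issuer
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Withdrawal must be possible at any time, simply and free of charge."""
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def valid(self) -> bool:
        # Consent cannot be implied: it counts only if explicitly
        # given, confirmed, and not withdrawn.
        return self.confirmed and self.withdrawn_at is None

consent = ConsentRecord("user-1", "marketing e-mail",
                        datetime.now(timezone.utc), confirmed=True)
print(consent.valid)   # True
consent.withdraw()
print(consent.valid)   # False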

The law in Poland, as in other countries, tries to regulate two more issues related to the use of AI that are particularly dangerous. However, the results of existing regulation in these areas seem rather insufficient. One issue is the possibility of unlawful use of profiling techniques and AI-made profiles; the other is criminal and civil liability for damage caused by AI. Each of these issues deserves separate attention.

3. Profiling and surveillance systems and automatic decisions

The first issue concerns not only business but also social and political security and personal safety. The definition of profiling can be found in Art. 4(4) GDPR. Profiling is the automated processing of personal data used to assess personal characteristics in a broad sense, i.e. economic situation, health, views, preferences, interests, etc. What is important here is the method of data processing (without human participation, i.e. automatically) and the purpose of processing (the evaluation of personal factors). All prediction techniques that identify a specific web user on the basis of the history of his or her online activity can be considered profiling. Building consumer profiles and forecasting consumer decisions may be limited at the request of the persons concerned. In profiling, the GDPR gives the right to the immediate rectification of the output data being processed and of data obtained as a result of analyses (e.g. assigning the client to a specific category). The only way to oppose a request for rectification or an objection is to demonstrate that other legitimate grounds for processing outweigh the interests of the requester. However, even this option is unavailable to entities using automated data processing for direct marketing purposes; in that case, compliance with the data subject's request is, as a rule, unconditional. Where data are collected for profiling purposes, documents such as "privacy policies" are used to inform the persons whose data may be collected about their rights and obligations. It should be expected that the protection of the privacy of people who have made their data available online will cause disputes and serious consequences for entrepreneurs guilty of infringements.

However, the problem is not only profiling for marketing purposes. The main concern is profiling for administrative and political purposes. Systems for collecting knowledge about citizens have existed for a long time and are used for various purposes. For example, an IT system was already in use in Norway in the 1970s to decide whether a given individual was entitled to support in the form of a housing allowance. The system used the identification number entered by the person concerned to connect to other databases and, based on the information available in them, decided whether the person should be granted the allowance. Nowadays, the terrorist threat and the authoritarian aspirations of many governments mean that profiling based on AI models, combined with the automatic issuing of administrative decisions, can pose a real threat on a large scale not only to privacy but also to fundamental civil rights.
We know that certain AI algorithms, when used to predict criminal recidivism, can display gender and racial bias, showing different recidivism prediction probabilities for women vs. men or for nationals vs. foreigners, and some AI programs for facial analysis display gender and racial bias, with low error rates in determining the gender of lighter-skinned men but high error rates in determining the gender of darker-skinned women.[8] At this point, the usual example is the Chinese social credit scheme (社会信用体系, shèhuì xìnyòng tǐxì), a system of social surveillance based on scoring and profiling citizens on the basis of their negative and positive behaviors, each converted into points according to a scale established by the authorities. AI has progressed to compete with the best of the human brain in many areas, often with stunning accuracy, quality and speed. But it seems that AI fails to capture or respond to the intangible human factors that go into real-life decision-making: the ethical, moral and other human considerations that guide the course of business, life and society at large.[9] But for how long?
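Detecting such bias is itself a simple computation. The following minimal sketch (plain Python; the predictions and labels are invented for illustration, not drawn from the cited studies) compares error rates across two demographic groups, which is the kind of disparity measurement behind the findings described above:

# Illustrative only: invented model outputs for a bias audit.
# Each record: (group, true_label, predicted_label).
records = [
    ("A", 0, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 0, 1), ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 1),
]

def error_rate(group: str) -> float:
    """Share of wrong predictions within one demographic group."""
    rows = [(y, p) for g, y, p in records if g == group]
    return sum(1 for y, p in rows if y != p) / len(rows)

for group in ("A", "B"):
    print(f"group {group}: error rate = {error_rate(group):.0%}")
# A large gap between the two rates is the kind of disparity reported
# for recidivism-prediction and face-analysis systems.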

Nevertheless, it should be assumed that the automated issuance of decisions and court rulings in many areas is a matter of the near future. The authors of the article with the provocative title “Governance of the AI, by the AI, and for the AI” are right – AI is increasingly governing humans.[10]

4. Liability

From the legal point of view, one of the basic issues connected with the use of any new technical device is legal liability for damage caused by its operation. Polish law, based on the principles of civil law in continental Europe, solves this issue with constructions derived from Roman law. There are three principles of tort liability in Polish civil law:

- the principle of fault, under which we are liable for our own acts and those of others, both natural and legal persons (Articles 415-416 of the Civil Code);

- the principle of risk, on which liability for the acts of third parties is based in some qualified cases (Article 430 of the Civil Code);

- the principle of equity (of an auxiliary nature), which allows compensation of damage for social reasons by an entity not at fault (cf. Art. 417² of the Civil Code).

The second type of liability for damages is contractual liability, which arises as a result of the non-performance or improper performance of contractual obligations. In the case of contractual liability, the creditor seeking damages does not have to prove the fault of the debtor; he should, however, prove that he suffered damage due to the non-performance or improper performance of the contract. The debtor is also liable for damage caused by undetermined causes, and it is up to the debtor to prove that he is not to blame for the non-performance or improper performance of the obligation.

Both types of liability are based on the construction of legal capacity and refer to the human person who bears the responsibility. In fault-based liability there must also be a causal link between the damage and the person responsible for it. At first glance we can see that removing the human being from decision-making processes can have unpredictable consequences in terms of accountability.

At this point, I leave aside the issue of criminal liability, which is exclusively related to the human person. The medieval times when animals were sentenced in criminal trials are over. The exception in the form of corporate criminal liability is a false exception, because here too we are dealing with penalties imposed on a group of people organized in a specially formalized way. Of course, in criminal law we know the concept of liability for the operation of machines and devices under our supervision, but it is based on negligence of duty. What happens if no negligence can be proven and the harmful action depends solely on the machine's decisions, based on an algorithm it has created itself?

The answers to the questions that arise are extremely difficult, but we will have to find them. It can already be seen that it will be difficult to apply the existing constructions of liability to situations where material damage, copyright infringement, the infringement of a human's personal rights or a crime arises as a result of the conscious action of a non-human intellect.

The European Parliament addressed such issues in 2017 by adopting a resolution on Civil Law Rules on Robotics, in which the attribution of "electronic personhood" to the most advanced robots was recommended. This concept met with heavy criticism and was finally dropped. Work on AI civil liability issues continued in the European Parliament and the Commission and resulted in the 2020 Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics.[11] As a result of several years of work,[12] on 28 September 2022 the European Commission proposed a Directive on non-contractual civil liability rules for AI (the AI Liability Directive), complementing the AI Act proposed in April 2021. The proposal was accompanied by proposed revisions to the existing Product Liability Directive (85/374/EEC). "The new rules intend to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU. The AI liability directive would create a rebuttable 'presumption of causality', to ease the burden of proof for victims to establish damage caused by an AI system. It would furthermore give national courts the power to order disclosure of evidence about high-risk AI systems suspected of having caused damage."[13] If enacted, and taken together, these acts may influence international and member-state regulations, but many of the adopted solutions still raise questions from both lawyers and stakeholders. [14]

In the US, the President signed Executive Order 13960 on Promoting the Use of Trustworthy AI in the Federal Government in 2020, which establishes guidance for federal agency adoption of AI.[15] Together with the Advancing American AI Act of April 21, 2021,[16] this marked the start of AI regulation in the US. This is very important because the United States, as the country where the main corporations dealing with AI research have their headquarters, may play a key role in establishing global rules for the operation of this technology and responsibility for its operation, and its developed legal system, separate from the European one, has key global importance in many sectors of the economy. As regards AI liability, products liability is the area of law that addresses remedies for injuries or property damage arising from the functioning of AI.[17]

5. Legal problems of the near future

To sum up the considerations contained in this paper, I would like to outline a few more key issues that lawyers around the world will have to solve quickly as a result of the development of AI.

Geoffrey Hinton and other scientists have compared the development of AI to the development of nuclear weapons, recognizing that the new technologies pose a similar threat to humanity. Regulating them requires international cooperation on the scale that nuclear non-proliferation treaties once required. Hinton stressed the necessity of cooperation between the two main players, the USA and China, on this matter. So far, such international treaties do not exist, and in the current tense situation in the world it is difficult to predict if and when they will be created. What is more, international tensions mean that the call for a temporary halt to work on AI made by many scientists and businessmen was treated rather as evidence of their naivety and does not seem to have any practical significance.

Concerns about the threat to human rights posed by the development of AI are being loudly articulated. It is difficult to say what impact these fears will have on the practical development of human rights institutions around the world. In her recent Chatham House paper, K. Jones shows the human rights implications of using AI.[18] The author strongly emphasizes the role of ethics and of the values related to universal human rights in the design of AI technology. The report's proposal to include the UN in the international discussion on the consequences of implementing AI for human rights certainly deserves support and could add a new dimension to this discussion. However, the declining role of the UN in the world order must be taken into account. The report also shows that for most AI creators, human rights are not necessarily the main framework defining their actions, and that new regulations sometimes do not pay due attention to these rights.

The regulation of liability issues is also far from a practical solution, even in the EU, as shown above. A problem that will require research and probably regulation is the relationship between machine learning and copyright law, starting from the status of generated content. Copyright law was created to protect human creativity; in most legal systems, the basic criterion for an element to be considered a copyrighted work is that it must be the product of human creation.

It can be said that each new device or class of devices creates new legal problems related to its operation in the human environment, so it can be predicted that legal issues arising from the use of intelligent machines (IoT) and robots will require ever more involvement from lawyers.

And finally, the lawyers themselves. As some say, the legal professions are on the list of professions that will disappear due to the development of IT. So far, the truer view seems to me to be that a chatbot is a good tool for a professional, but for a layman it may only confirm his mistaken beliefs.[19] One thing is certain: lawyers in the near future will have to define the whole concept of a human-centric legal system differently. We will probably have to establish new rules of cooperation between man and machine. This is much more than the use of intelligent technology in office work. [20] It essentially requires a new philosophy of law.



[1] What is generative AI?, McKinsey Explainer, https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai, accessed 12.05.2023; "Technology with the ability to perform tasks that would otherwise require human intelligence and which, usually, have the capacity to learn or adapt to new experiences or stimuli, including machine learning, speech and natural language processing, robotics and autonomous systems", https://www.lexisnexis.co.uk/legal/glossary/artificial-intelligence, accessed 13.05.2023.

[2] "Godfather of AI" Geoffrey Hinton Warns of "Existential Threat" of AI, Amanpour and Company, https://www.youtube.com/watch?v=Y6Sgp7y178k, accessed 12.05.2023;

Possible End of Humanity from AI? Geoffrey Hinton at MIT Technology Review's EmTech Digital, https://www.youtube.com/watch?v=sitHS6UDMJc, accessed 12.05.2023;

Max Tegmark interview: Six months to save humanity from AI, DW Business Special, https://www.youtube.com/watch?v=ewvpaXOQJoU, accessed 12.05.2023.

[3] E.g. in the US: Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource; in China: The Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence; in the UK: GB Algorithmic Transparency Standard; in Poland: Policy for the Development of Artificial Intelligence in Poland from 2020; UN: Principles for the ethical use of artificial intelligence in the United Nations system; OECD: Recommendation of the Council on Artificial Intelligence (OECD Legal Instruments); and many others.

[4] The Economist, The EU wants to become the world's super-regulator in AI, 24.04.2021, https://www.economist.com/europe/2021/04/24/the-eu-wants-to-become-the-worlds-super-regulator-in-ai, accessed 13.05.2023.

[5] I. Ulnicane, Artificial Intelligence in European Union: Politics, Ethics, and Regulations, in: The Routledge Handbook of European Integrations, ch. 14, edited by Thomas Hoerber, Gabriel Weber and Ignazio Cabras, 2022.

[6] AI Watch: National Strategies on AI. A European Perspective, 2021 Edition, a JRC-OECD Report.

[7] https://www.gazetaprawna.pl/firma-i-prawo/artykuly/8664379,cala-wladza-w-rece-uodo-rodo-rozporzadzenie-o-ochronie-danych-osobowych.html, accessed 12.05.2023.

[8] White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, pp. 10-11, https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf, accessed 13.05.2023.

[9] https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions

[10] A. W. Torrance, B. Tomlinson, Governance of the AI, by the AI, and for the AI, https://arxiv.org/ftp/arxiv/papers/2305/2305.03719.pdf

[11] Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee: Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0064, accessed 13.05.2023.

[12] These works in the years 2017-2022 are described in the report: C. Wendehorst, AI Liability in Europe: Anticipating the EU AI Liability Directive, Ada Lovelace Institute, September 2022.

[13] Artificial intelligence liability directive, Briefing, 10.02.2023, https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2023)739342, accessed 13.05.2023.

[14] See e.g.: P. Hacker, The European AI Liability Directives: Critique of a Half-Hearted Approach and Lessons for the Future, https://blogs.law.ox.ac.uk/oblb/blog-post/2023/03/european-ai-liability-directives-critique-half-hearted-approach-and-lessons, accessed 13.05.2023; R. Sarel, What Should We Do About ChatGPT?, https://blogs.law.ox.ac.uk/oblb/blog-post/2023/03/what-should-we-do-about-chatgpt, accessed 13.05.2023.

[15] https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government, accessed 13.05.2023.

[16] https://www.congress.gov/bill/117th-congress/senate-bill/1353/text

[17] J. Villasenor, Products liability law as a way to address AI harms, https://www.brookings.edu/research/products-liability-law-as-a-way-to-address-ai-harms/; R. E. Long, Artificial intelligence liability: the rules are changing, https://blogs.lse.ac.uk/businessreview/2021/08/16/artificial-intelligence-liability-the-rules-are-changing/

[18] K. Jones, AI governance and human rights: Resetting the relationship, January 2023; Centre for Governance of AI, https://www.governance.ai/

[19] C. Criddle, Law firms embrace the efficiencies of artificial intelligence, Financial Times, https://www.ft.com/content/9b1b1c5d-f382-484f-961a-b45ae0526675, accessed 04.05.2023; E. Mulvaney, L. Weber, End of the Billable Hour? Law Firms Get On Board With Artificial Intelligence, Wall Street Journal, 11.05.2023, https://www.wsj.com/articles/end-of-the-billable-hour-law-firms-get-on-board-with-artificial-intelligence-17ebd3f8, accessed 13.05.2023.

[20] J. Lanier, G. Weyl, AI is an Ideology, Not a Technology, Wired, 15.03.2023, https://www-wired-com.cdn.ampproject.org/c/s/www.wired.com/story/opinion-ai-is-an-ideology-not-a-technology/amp, accessed 14.05.2023.


