Artificial Intelligence and Litigation Risk

Assisted driving systems installed in Tesla Inc., Hyundai Motor Co. and Subaru Corp. vehicles failed to avoid head-on collisions in testing done by AAA, though Tesla's Autopilot system did slow the vehicle to a walking speed before striking an oncoming foam model of a car.

AAA, a U.S. consumer and travel services organization, said the tests illustrate how current assisted driving and automated braking systems fall short of true autonomous driving and require drivers to stay in control of their vehicles.

AI technology raises tricky questions about the scope of potential civil liability, including for AI-driven privacy violations, discrimination, failure to spot compliance issues, accidents, and security breaches. It may also be difficult to foresee which legal or natural person(s) could, or should, be held responsible for harm caused by AI.

Challenges in determining liability

When human beings make decisions, the nexus between the decision and the decision maker is typically obvious. This makes establishing liability conceptually straightforward.

Establishing liability for AI-caused harm is potentially more complex. Three potential problems arise:

  1. The autonomy problem: If an alleged harm has been caused by an AI system that uses machine learning to make decisions without a "human in the loop", general liability principles - which are founded on agency, control, and foreseeability by natural or legal persons - may be difficult to apply. The more autonomous AI technology becomes, the harder it may be to identify the party that caused the damage.
  2. The multi-stakeholder problem: More complex AI systems involve more stakeholders in their development and deployment. If such a system has caused harm, it may be difficult to determine which stakeholder or stakeholders to hold responsible.
  3. The "black box" problem: Programmers may lack an exact understanding of how an autonomous AI system made an impugned decision. How would causation or fault be established - or disproved - if the harmful output cannot be explained or proven?

Limitations of the current legal framework

Existing liability principles may adequately address cases in which an AI system's act or omission can be traced to a specific agent's design defect.

Tort law and product liability law could, however, prove unsatisfactory for addressing potential liability in the context of advanced, autonomous AI systems.

Law and its corollary, liability, fall within the field of human action from which AI is, by nature, excluded. It will, therefore, be necessary to consider whether the applicable laws on civil liability can or must be adapted to the specificities of situations involving AI. The challenge will be to preserve the universal character of the law while promoting the emergence of AI in a legally secure space.

Bridging gaps between old laws and new technologies

AI is constantly evolving. Laws that govern AI systems and their users (both statutory and common law) will have to be versatile and continuously updated in order to remain adequately responsive.

Legal personality

One novel and controversial proposal is to confer legal rights and a corresponding set of legal duties on AI systems (similar to laws ascribing legal personhood to corporations). This would allow the AI itself to be held directly accountable for any harm it causes.

Legal personality has been granted to ships in the United Kingdom and, more recently, Saudi citizenship has been granted to a robot named Sophia. In 2015, the Civil Code of Quebec was amended to affirm that animals are not "things" but sentient beings with biological imperatives.

Strict liability

Another possibility is to develop a strict liability regime to ensure compensation from AI operators that expose third parties to an increased risk of harm (should that harm ever materialize), combined with liability insurance schemes to protect the AI operators themselves.

Most recently, the European Parliament adopted its Resolution on a Civil Liability Regime for Artificial Intelligence, which would impose strict liability on any operator of a high-risk AI system, even where the operator acted with due diligence in its operation.

Common enterprise liability

Common enterprise liability could also be used to hold several AI stakeholders, each of which worked towards a common objective to build the AI technology (e.g., manufacturers, software engineers, and software installers), jointly responsible for indemnifying a plaintiff for AI-caused harm. Because common enterprise liability has only ever been applied where entities are organizationally related (e.g., a parent company and its subsidiary), the doctrine would have to be extended to cover AI development, where stakeholders may not be related in this way.

Anticipating applicable legal frameworks

The application of AI technology could create liability risks for businesses for which the law, as it stands, does not provide complete or satisfactory answers. Managing this risk without unduly delaying the deployment of useful and novel AI systems is, and will remain, an important challenge for businesses. To address this challenge, leaders and their advisers will need to match foresight concerning technological developments with foresight concerning the potential evolution of applicable legal frameworks.
