EC Report Highlights Key Issues with Data Protection and Connected and Automated Vehicles

In its detailed report on the Ethics of Connected and Automated Vehicles, the European Commission sets out key data protection recommendations.

Definition: Connected and Automated Vehicles (CAVs) are vehicles that are both connected and automated: they display one of the five levels of automation according to SAE International’s standard J3016, combined with the capacity to receive and/or send wireless information that improves the vehicle’s automated capabilities and enhances its contextual awareness.

General: The acquisition and processing of static and dynamic data by CAVs should safeguard basic privacy rights, should not create discrimination between users, and should happen via processes that are accessible and understandable to the subjects involved.

Key Data Protection Recommendations:

Safeguard Informational Privacy and Informed Consent.

  • CAV operations presuppose the collection and processing of great volumes and varied combinations of static and dynamic data relating to the vehicle, its users, and the surrounding environments.
  • Policymakers should set further legal safeguards and enforce the effective application of data protection legislation, notably provisions on organisational and technical safeguards, to ensure that the data of the CAV user are only ever disclosed, or forwarded, on a voluntary and informed basis.
  • Policymakers and researchers should make sure that the development of such measures is conducted and grounded in responsible innovation processes with a high level of engagement between stakeholders and the wider public.

Enable User Choice, Seek Informed Consent Options and Develop Related Best Practice Industry Standards.

  • There should be more nuanced and alternative approaches to consent-based user agreements for CAV services. The formulation of such alternative approaches should: (a) go beyond “take-it-or-leave-it” models of consent, to include agile and continuous consent options; (b) leverage competition and consumer protection law to enable consumer choice; and (c) develop industry standards that offer high protection without relying solely on consent.
  • User consent may not always be a sufficient measure to safeguard a data subject’s privacy rights. Thus, policymakers must ensure that new industry standards around “reasonable algorithmic inferences” are established. Such best practice standards should address ethical data sharing, transparency and business practices (e.g. with insurers, advertisers or employers) and give guidance on the grounds for, and boundaries of, legally and ethically acceptable inferential analytics (e.g. precluding inferences of race or age for the purpose of offering goods and services).
  • The proper functioning of such consent management systems should be accompanied by appropriate auditing or certification mechanisms.

Develop Measures to Foster Protection of Individuals at Group Level.

  • Develop legal guidelines that protect individuals’ rights at the group level (e.g. the rights of drivers, pedestrians, passengers or other road users) and outline strategies to resolve possible conflicts between data subjects that have claims over the same data (e.g. location data, computer vision data), or disputes between data subjects, data controllers and other parties (e.g. insurance companies).
  • Policymakers should develop new legal privacy guidelines that govern the collection, assessment and sharing of not just personal data, but also non-personal data, third party personal data, and anonymized data, if these pose a privacy risk for individuals.
  • This is important because machine learning algorithms can infer private personal information about people from non-personal data, anonymized data, or personal data in group profiles, over which the affected party might not have data protection rights. This is a new and significant privacy risk.

Develop Transparency Strategies to Inform Users and Pedestrians about Data Collection and Associated Rights.

  • CAVs move through and/or near public and private spaces where non-consensual monitoring and the collection of traffic-related data, and its later use for research, development or other purposes, can occur.
  • Policymakers should work with manufacturers and deployers to develop meaningful, standardised transparency strategies to inform road users, including pedestrians, of data collection in a CAV operating area that may, directly or indirectly, pose risks to their privacy as they travel through such areas.
  • This includes digital and near real-time updates for road users when approaching, entering, and leaving zones where potentially privacy-intrusive data collection occurs.
  • Such communication may occur through in-vehicle or wearable smart-device displays, audio-visual aids on roads (e.g. street signs, flashing icons, beeping sounds), or other minimally privacy-invasive communication modes with textual, visual, audio and/or haptic elements. This allows privacy risks and rights to be communicated to a wide and diverse audience.

Reduce Opacity in Algorithmic Decisions.

  • User-centered methods and interfaces for the explainability of AI-based forms of CAV decision-making should be developed.
  • The methods and vocabulary used to explain the functioning of CAV technology should be transparent and cognitively accessible, the capabilities and purposes of CAV systems should be openly communicated, and the outcomes should be traceable.

Promote Data, Algorithmic, AI Literacy and Public Participation.

Individuals and the general public need to be adequately informed and equipped with the necessary tools to exercise their rights, such as the right to privacy, and to actively and independently scrutinize, question, refrain from using, or negotiate CAV modes of use and services.


