Artificial Intelligence in the Workplace: An Opportunity or a Threat to Employee Privacy?
written by Michele Tamburrelli
Recently, the Italian Data Protection Authority imposed a fine of €80,000 on a company that, in violation of privacy regulations, had kept former employees' corporate email accounts active. This prolonged access allowed the company to view private correspondence without an adequate privacy notice or safeguards. The case highlights the risks involved in managing employees' personal data, especially at a time when technologies like Artificial Intelligence (AI) are evolving rapidly.
The General Data Protection Regulation (GDPR) establishes strict rules for data processing, based on principles of lawfulness, fairness, transparency, data minimization, purpose limitation, and storage limitation. In other words, companies must collect data only for clear, explicit, and legitimate purposes, avoiding retention beyond what is necessary. However, as demonstrated by the case sanctioned by the Authority, many employers seem unaware of the legal implications related to the use of employee data.
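To make the storage-limitation principle concrete, the following is a minimal sketch, in Python, of how a retention rule for former employees' accounts could be enforced automatically. The retention window, the example data, and the deactivation step are all illustrative assumptions; the actual period and mechanics must come from the company's own retention policy and mail platform.

```python
from datetime import date, timedelta

# Illustrative retention window for former employees' mailboxes;
# the real period must come from the company's retention policy.
RETENTION = timedelta(days=30)

# Invented example data: (account, termination date)
former_employees = [
    ("m.rossi@example.com", date(2024, 11, 2)),
    ("l.bianchi@example.com", date(2025, 1, 20)),
]

def accounts_past_retention(records, today=None):
    """Return accounts whose retention window has expired (storage limitation)."""
    today = today or date.today()
    return [account for account, left_on in records if today - left_on > RETENTION]

for account in accounts_past_retention(former_employees):
    # In a real system this would call the mail platform's admin API
    # to deactivate the mailbox and set an impersonal auto-reply.
    print("deactivate:", account)
```

A routine of this kind, however simple, is exactly what was missing in the sanctioned case: accounts would have been closed on schedule instead of silently kept active.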
Additionally, Article 4 of the Workers' Statute (a cornerstone of Italian labour law), as amended by the Jobs Act, regulates the use of remote monitoring tools on workers. The legislation allows employers to adopt software and hardware for organizational, production, or security needs without a union agreement, provided that such tools are not intended for the direct control of work activity. When, however, the use of these tools involves even indirect control of employees, a prior agreement with company union representatives (or, failing that, authorization from the Labour Inspectorate) is required. This balance aims to ensure that technology is used ethically, respecting the dignity and privacy of workers.
But when is a system a work tool and when is it a control tool? The question is hotly debated for certain increasingly sophisticated software and devices, such as vehicle geolocation systems, software for monitoring computer activity, and biometric access-control devices, all of which allow large amounts of data to be stored and processed.
The arrival of Artificial Intelligence in workplaces amplifies these risks and makes the issue even more complex. AI allows data to be collected and analyzed with unprecedented precision and speed, but this represents a double-edged sword. On one hand, AI optimizes production processes; on the other, it risks leading employers to exceed, even unknowingly, the limits established by the GDPR. AI can become a "high-performance sports car" in the hands of the employer, capable of speeding beyond privacy boundaries without adequate control.
The principles of transparency and data minimization are particularly vulnerable in the era of AI. The algorithms used by AI systems can analyze employees' behaviors, preferences, and habits, building detailed profiles and collecting information far beyond what is necessary. In the absence of clear rules and rigorous oversight, there is a risk of violating workers' right to privacy by processing data in ways disproportionate to the original purposes.
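To illustrate what minimization can mean in practice, here is a small Python sketch that reduces a raw activity log to the aggregate figures a legitimate optimization purpose might actually require, replacing identifiers with salted hashes. The log format and field names are invented for the example.

```python
import hashlib
from collections import defaultdict

# Invented raw log entries: (employee_id, application, minutes_active)
raw_log = [
    ("emp-001", "CAD", 95),
    ("emp-001", "email", 40),
    ("emp-002", "CAD", 120),
]

def pseudonymize(employee_id, salt="rotate-me-regularly"):
    """Replace the identifier with a salted hash so the analysis
    cannot be trivially linked back to a named person."""
    return hashlib.sha256((salt + employee_id).encode()).hexdigest()[:12]

def minimized_view(entries):
    """Keep only total time per application per pseudonym, dropping
    the behavioral detail the stated purpose does not require."""
    totals = defaultdict(int)
    for emp, app, minutes in entries:
        totals[(pseudonymize(emp), app)] += minutes
    return totals

for (pseudo, app), minutes in minimized_view(raw_log).items():
    print(pseudo, app, minutes)
```

Note that salted hashing is pseudonymization, not anonymization: under the GDPR the result is still personal data, so this reduces exposure but does not lift the legal obligations.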
In response, the European Union has adopted Regulation (EU) 2024/1689, better known as the AI Act, fully applicable in Italy, as across the EU, from August 2026 (with certain provisions already in force from 2025). It aims to establish clear rules on the use of artificial intelligence, including in the workplace, introducing specific requirements to mitigate risks and protect employees' rights and integrating with the GDPR to provide a comprehensive protection framework.
In light of these challenges, it is essential that those who develop and offer AI technologies assume ethical responsibility. One cannot place a "high-performance sports car" in the hands of the employer without the right instructions and without a control system. Providers of these technologies should offer support and training on how to use AI in compliance with regulations, preventing this technology from becoming an invasive surveillance tool.
The ultimate responsibility for data processing, however, remains with the employer. It might be reasonable to extend to all employers the obligation to conduct a Data Protection Impact Assessment (DPIA), modeled on the risk assessment already mandatory in health and safety. This assessment, already required under the GDPR (Article 35) for high-risk processing, would increase awareness, standardize practices, and reduce the risk of sanctions.
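As a purely illustrative sketch of what a standardized assessment could look like, the structure below captures a few of the elements a DPIA must document; the fields are a simplification of what Article 35 GDPR actually requires, and the example values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class DPIARecord:
    """Simplified, illustrative DPIA structure; not a substitute
    for the full content required by Article 35 GDPR."""
    processing_purpose: str
    data_categories: list = field(default_factory=list)
    necessity_assessment: str = ""
    risks_to_data_subjects: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def is_complete(self):
        """Flag the record as incomplete if any field is empty or
        a documented risk lacks a corresponding mitigation."""
        return bool(
            self.processing_purpose
            and self.data_categories
            and self.necessity_assessment
            and self.risks_to_data_subjects
            and len(self.mitigations) >= len(self.risks_to_data_subjects)
        )

dpia = DPIARecord(
    processing_purpose="AI-assisted shift planning",
    data_categories=["working hours", "availability preferences"],
    necessity_assessment="Aggregated data suffices; no individual profiling needed.",
    risks_to_data_subjects=["indirect performance monitoring"],
    mitigations=["aggregate before analysis", "union agreement under Article 4"],
)
print("DPIA complete:", dpia.is_complete())
```

Even a skeleton like this forces the employer to state the purpose and weigh necessity before any processing starts, which is precisely the awareness the proposal aims to build.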
Finally, training plays a crucial role. It is essential that employers fully understand the limits and responsibilities related to personal data processing and that employees are aware of their rights. Only with conscious and responsible management of AI will it be possible to prevent this technology from "dangerously skidding," preserving a safe work environment that respects the privacy and rights of workers.