Four Principles of Data Science Ethics
Photo by Aron Visuals on Unsplash

The full article appears here: https://retina.ai/blog/four-principles-data-science-ethics/

Does your data science team worry about aligning your work with your values? Data science and machine learning provide new capabilities, and with those come new ethical responsibilities.

At Retina, we developed four principles of data science ethics to make sure we serve our clients and their customers with respect and fairness. These principles are inspired and informed both by professional guidelines* refined over decades and by contemporary experts such as former US Chief Data Scientist DJ Patil.

1. Protect Individual and Company Privacy

Over 5 billion records of personal information were exposed through data breaches in 2019, costing businesses trillions of dollars in the resulting scandals. As people move more of their daily lives online, from sharing what's for lunch on social media to managing bank accounts and credit cards, hackers have more to gain by targeting a company's records about its customers and prospects.

The other side of the coin is that each of these data breaches can cause personal harm, such as identity theft, to real people. It's important to have comprehensive plans to protect and secure user data, but also to recognize that those plans can fail. So developing a mechanism for redress, in case people are ultimately harmed by the results or implementation of a model, is both critical and required to comply with regulations like the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR).
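As a purely illustrative sketch (not Retina's actual process), a redress mechanism might start with a simple, auditable handler for verified consumer requests. The `RedressRequest` type, the in-memory `customer_store`, and the request kinds below are all hypothetical placeholders:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class RedressRequest:
    """A verified consumer request, e.g. a CCPA/GDPR deletion or opt-out."""
    customer_id: str
    kind: str  # "delete" or "opt_out"; anything else goes to manual review
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def handle_redress(request: RedressRequest, customer_store: dict) -> dict:
    """Apply the request to a (hypothetical) customer store and return an audit record."""
    if request.kind == "delete":
        removed = customer_store.pop(request.customer_id, None)
        outcome = "deleted" if removed is not None else "not_found"
    elif request.kind == "opt_out":
        record = customer_store.get(request.customer_id)
        if record is not None:
            record["opted_out_of_modeling"] = True
        outcome = "opted_out" if record is not None else "not_found"
    else:
        outcome = "manual_review"
    return {
        "customer_id": request.customer_id,
        "kind": request.kind,
        "outcome": outcome,
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }


# Example: a deletion request against a toy store with one customer record.
store = {"cust_42": {"email": "a@example.com"}}
print(handle_redress(RedressRequest("cust_42", "delete"), store))
```

The point of the audit record is that every request leaves a trace, even when the underlying data has already been removed.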

2. Account For and Remove Unfair Data Bias

Data breaches aren't the only way models can cause harm: models trained on biased data can make unfair decisions about people's opportunities, livelihoods, and more. In one example, an unsupervised model for deciding whether an applicant should qualify for a low insurance rate took the applicant's race into account.

Some unsupervised machine learning algorithms operate like a black box, so it's difficult to know up front whether the resulting model discriminates based on characteristics like race or sex. We research the data in advance to identify possible biases in what will serve as the basis of the model. As another safeguard against unfair bias, we also test completed models for fairness and for disparate error rates among different user groups, as sketched below.
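For illustration, here is a minimal sketch (not Retina's actual tooling) of one such check: comparing false-positive and false-negative rates across groups after a model has been scored against known outcomes. The column names `group`, `label`, and `prediction` are hypothetical placeholders:

```python
import pandas as pd


def disparate_error_rates(df: pd.DataFrame, group_col: str,
                          y_true: str, y_pred: str) -> pd.DataFrame:
    """Compare false-positive and false-negative rates across groups."""
    rows = []
    for group, sub in df.groupby(group_col):
        fp = ((sub[y_pred] == 1) & (sub[y_true] == 0)).sum()
        fn = ((sub[y_pred] == 0) & (sub[y_true] == 1)).sum()
        negatives = (sub[y_true] == 0).sum()
        positives = (sub[y_true] == 1).sum()
        rows.append({
            group_col: group,
            "false_positive_rate": fp / negatives if negatives else float("nan"),
            "false_negative_rate": fn / positives if positives else float("nan"),
            "n": len(sub),
        })
    return pd.DataFrame(rows)


# Example: toy predictions scored against known labels, broken out by a
# (hypothetical) protected attribute column named "group".
scored = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 0, 1],
    "prediction": [1, 0, 0, 1, 1],
})
print(disparate_error_rates(scored, "group", "label", "prediction"))
```

Large gaps in these per-group rates are a signal to revisit the training data or the model before shipping, whatever fairness criterion a team ultimately adopts.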

3. Ask About The Use Cases of End Results

...

4. Ship High Quality and Accurate Models

...

TO READ MORE: https://retina.ai/blog/four-principles-data-science-ethics/


