AI and Ethical Guidelines in Research Practice
By Lisa Salas and Rebecca Dunlop

Artificial intelligence (AI) has become ubiquitous in market research and insights, prompting growing concern over the need for ethical guidelines to help researchers navigate its use. The Research Society Code of Professional Behaviour was created to guide researchers in conducting their work ethically, and it applies equally to the use of AI tools. This article explores how The Code can be applied to the use of AI in market research and insights.

There has been increasing concern over the use of artificial intelligence (AI) in market and social research, and over the need for clear guidelines to ensure it is used ethically. AI is becoming ubiquitous in the research, insights, CX and analytics industries. These concerns are valid: like any research practice, AI has the potential to cause harm or to mislead respondents and buyers. AI is constantly evolving, and its use in research raises many new questions for researchers to consider. This article explores how The Research Society Code of Professional Behaviour serves as a compass for researchers navigating the use of AI in their work.

The Research Society Code of Professional Behaviour as a framework

The Research Society Code of Professional Behaviour serves as a foundational guide for members in all facets of their professional conduct, and provides the public with confidence that their best interests are paramount. The Code was designed to apply to all research methodologies; the introduction of AI into research practice is reminiscent of the earlier integration of the Internet, mobile and social media into research. While The Code does not take precedence over applicable law, it outlines a set of ethical principles and standards that professionals should adhere to. These principles include values such as honesty, transparency, integrity, respect for others and a commitment to the well-being of respondents. They act as a guide for decision making and for applying a duty of care to protect respondents and buyers.

Transparency

Many ethical themes throughout The Code are relevant to the use of AI systems in research. Honesty and integrity are central themes that govern all professional behaviour and decision making. One should be honest about the use of AI in a research project, including the capabilities and limitations of the AI system being deployed. Where possible, one should be transparent about the AI decision-making processes applied in data analysis. One should not misrepresent or overinflate AI's abilities, to avoid setting unrealistic expectations with buyers. Additionally, members should be confident in the integrity of AI systems, including their security and confidentiality, and satisfied that they are resistant to manipulation and fraud.

Privacy and data protection

Data storage and privacy protection are essential when handling any identifiable, personal information. As with all research practices, strict and transparent protocols must be in place to ensure that respondent and client personal information is protected. For instance, respondent Internet Protocol (IP) addresses must be removed as a standard practice to minimise the risk of a security breach.

Duty of care

Members are bound by a duty of care to ensure that AI systems are developed and used in ways that do not harm members of the public, and should take special care when researching people in vulnerable circumstances. They should advocate for responsible AI practices and report unethical behaviour within the field.

Professional development

The need for professional development underpins professional conduct, and it is heightened by emerging technologies such as AI. Members should strive for excellence in their field and continuously update their skills. This is especially crucial in the rapidly evolving AI landscape, where keeping abreast of best practices and emerging ethical issues is an ongoing task.

Equality and inclusion

AI systems should be developed with equality and diversity in mind, and these values should be reflected in their design. Participants must not be discriminated against, directly or indirectly, on the basis of gender, race or any other characteristic. Researchers should be self-reflective and aware of their own biases in order to minimise their impact on their work.

Conclusion

The inclusion of AI tools in research raises many ethical questions and will continue to do so for quite some time. Members need to be aware of their responsibility to protect and respect the best interests of respondents and buyers in all areas of their professional conduct, and the use of AI is no exception. The Code offers members a set of principles that govern their professional conduct and decision making, and it should be used when designing, developing and implementing AI tools in their work.

The Research Society Code of Professional Behaviour

Further reading on AI and ethics in research: AI or Artificial Banditry? Exploring the Ethics of Intelligent Algorithms
