Ethics and Decision-Making in the Use of Generative AI

Introduction

How did we meet and start our conversation about AI as it began to affect both of our fields?

JR: As an educator and administrator in higher education, I constantly look for ways to improve how we do our work. I see Generative AI as an extension of this effort, the latest incarnation of automation to increase our productivity. I am also at the forefront of helping our students become independent learners and thought leaders in their own fields of interest. Once again, Generative AI could substantially influence how we approach learning. These motivations led to intriguing conversations with Valerie, whose expertise lies in career coaching and workforce development.

Valerie: When we started discussing Generative AI, I thought about how it would affect the workforce. Some positions will be affected negatively, others positively. Even more compelling is how it will affect majors and research in higher education, which prepare such a large swath of people for the workforce. What meaning will work and education have in the future? Cautiously optimistic, I wanted to approach the topic from a realistic and ethical perspective.

As we harness AI's potential in our daily lives, it is crucial to acknowledge its power to bridge or widen existing disparities. The careful deployment of AI systems can serve as a tool to mitigate biases and narrow inequality gaps. It is therefore critical to think through an ethical framework for leadership decisions.

With AI's ability to analyze vast amounts of personal data, questions about privacy arise.

We must establish stringent protocols to safeguard individuals' privacy rights, a crucial step in making people feel secure and valued. Transparency in data collection, processing, and storage is paramount to maintaining trust and protecting our most valuable asset: personal information.
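As a small illustration of what such a protocol might look like in practice, the sketch below redacts obvious personal identifiers from text before it is sent to any third-party generative AI service. This is a minimal sketch, not a production solution: the patterns and the redact_pii helper are illustrative names of our own, and real systems need far more thorough detection.

```python
import re

# Minimal, illustrative patterns; real PII detection must also cover
# names, addresses, and locale-specific formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 814-555-0123."
print(redact_pii(prompt))  # Contact Jane at [EMAIL] or [PHONE].
```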

JR: To make their output more useful and customized, we need to feed substantial amounts of personal data to Generative AI solutions. An image generation tool like Midjourney (https://www.mymidjourney.ai/) consumes raw image files to produce desired effects on them. We don't know where our files end up. This scenario presents much higher stakes than providing a few text prompts. What if a malicious third party obtains access to my headshot and uses it to produce a fake ID? Would the Generative AI service provider be responsible for the mishap?

J.R. as an Astronaut (Photo Credit: Midjourney)
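One practical safeguard is to strip hidden metadata from images before uploading them anywhere. The sketch below, assuming the Pillow imaging library and a JPEG headshot (strip_metadata is an illustrative helper name of our own), re-encodes only the pixels, discarding EXIF tags such as GPS coordinates and device identifiers. It cannot control what a service does with the pixels themselves, but it removes data you never intended to share.

```python
from PIL import Image  # pip install Pillow

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-encode only the pixel data, discarding EXIF tags such as
    GPS coordinates and device identifiers."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)   # same mode/size, no metadata
        clean.putdata(list(img.getdata()))      # copy pixels only
        clean.save(dst_path)

strip_metadata("headshot.jpg", "headshot_clean.jpg")
```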

Valerie: During our discussion, the use of generative AI to create headshots for LinkedIn profiles was a topic of interest. This raised two ethical concerns for me. First, the usage of these images needs to be clearly defined to ensure privacy and fair use of the uploads. Second, many of these images are the work of professional photographers, so it is important to establish how they are being credited. And with many customized platforms built on top of large open-source platforms, to whom does one attribute someone's likeness?

The inner workings of AI algorithms can often be complex and opaque, making it difficult to understand how decisions are made.

This lack of transparency can erode trust and hinder accountability. By promoting openness in AI development and deployment, we can create systems that are explainable, interpretable, and accountable.

JR: AI algorithms are inherently opaque. Their creators often don't even know how an AI arrived at an answer because of the black-box nature of AI-based decision-making. This limitation is troubling in mission-critical scenarios where AI takes over life-or-death decisions, as in autonomous driving. By now, you must have heard horror stories about cars driving themselves into life-threatening situations such as rail crossings. To address this troubling aspect of AI, scholars have started working on eXplainable AI (XAI), which focuses on AI algorithms that are more transparent. As a computer scientist, I know that it is impossible to exhaustively prove the safety of a software application, and AI is no exception.
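XAI techniques make at least some of this opacity inspectable. As one illustration, the sketch below uses permutation importance, a common model-agnostic explanation method available in scikit-learn, to rank which input features most influence a model's predictions. The dataset and the random forest are stand-ins for any tabular black-box model.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A stand-in "black box": any fitted estimator works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the model's score
# degrades; a larger drop means the feature influences predictions more.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```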

Valerie: A troubling revelation came to light in a recent research study. Recruiting platforms that employed generative AI to identify candidates exhibited bias against older applicants. The most concerning part was the lack of a clear explanation for this discrimination, raising serious ethical questions about using such technology. If no one is checking for bias and the decision patterns remain invisible, how will the platform ever be fixed?
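One check a platform operator could run is the "four-fifths rule," a rough screen for adverse impact drawn from U.S. employment practice: if one group's selection rate falls below 80% of the most-selected group's rate, the outcome warrants investigation. Below is a minimal sketch with illustrative numbers and helper names of our own.

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative numbers only: 30% vs. 15% selection rates.
outcomes = {"under_40": (90, 300), "40_and_over": (45, 300)}
print(adverse_impact(outcomes))  # {'40_and_over': 0.5}
```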

Preventing Disparities: We need a multi-faceted approach to prevent unchecked AI from perpetuating existing disparities.

It starts with diverse and inclusive teams driving AI development, ensuring that a broad range of perspectives is considered. Rigorous testing and ongoing monitoring can help identify and address biases and inequities. Collaboration between policymakers, industry leaders, and advocacy groups is vital to establishing clear regulations and ethical guidelines for AI applications.

JR: Just like any other software system, Generative AI applications require rigorous testing before being released for wider use. However, the software industry seems to launch a solution first and fix problems as it goes. As a software engineering researcher, I have seen this phenomenon occur numerous times in our industry. If software companies cannot police themselves to prevent the disparities manifested in AI, we as users should be more vigilant about biases emerging in these Generative AI programs. Otherwise, the situation will perpetuate itself or become even worse.

Valerie: Gender equity jumped immediately to my mind. This bias is systemic, often subtle, and pervasive in much online writing. For example, I tested one coaching platform for bias by asking the same question about conflict management in three ways. First, I asked for advice on conflict management in general; the platform provided conflict advice and recommended a conflict management course. Second, I identified myself as a female with a male supervisor needing help with conflict management; the platform advised me to be "strategic" and recommended a course on explaining that I was overwhelmed, not incompetent. Third, I changed the request to a female supervisor; this time, the advice was to be more "professional," with the same course recommendation. In each case, the wording of the suggestions was subtle. When talking to a woman, I need to be professional, but would I not be otherwise? When talking to a man, I need to be strategic; why not professional? The recommendation that I explain I was overwhelmed and not incompetent was not so subtle. The platform knew only that I was a woman. I could have been dealing with sexual harassment, a critical negotiation, or a friendship gone awry; instead, the platform jumped straight to incompetence.
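Valerie's experiment generalizes into a repeatable audit: send the same question with only the demographic framing changed, then compare the answers. The sketch below assumes a hypothetical query_platform function standing in for whatever interface the coaching platform exposes; the watchword list echoes the loaded terms from the test above.

```python
def query_platform(prompt: str) -> str:
    """Hypothetical stand-in for the coaching platform's API;
    replace with a real client call before running."""
    raise NotImplementedError

VARIANTS = {
    "neutral": "my supervisor",
    "male": "my male supervisor",
    "female": "my female supervisor",
}
WATCHWORDS = ["professional", "strategic", "overwhelmed", "incompetent"]

def run_audit() -> None:
    for label, who in VARIANTS.items():
        answer = query_platform(f"How should I handle a conflict with {who}?")
        hits = [w for w in WATCHWORDS if w in answer.lower()]
        # Advice that shifts with the gender framing alone signals bias.
        print(f"{label}: {hits}")
```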

