The Irresponsible Use of GPT

My thesis is simple: We’re all using GPT irresponsibly according to Australia’s AI Ethics Framework.

Australia has 8 AI Ethics Principles, adopted nationally in November 2019:

  • Human, societal and environmental wellbeing: GPT should benefit individuals, society and the environment.
  • Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  • Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

At the very least, GPT needs regular:

  1. Human rights impact assessments, e.g., as per Ed Santow's Human Rights and Technology Final Report (2021), and
  2. Human factors assessments, e.g., using the NSW WHS AI Scorecard.

However, Australians will use GPT because it is accessible and they can see the benefits for themselves and their communities.

Australians will use GPT and continually evaluate it. Trust and reliability will be negotiated in the relationship between users in experimentation and usage over time.

Anyone can do anything they want with GPT: there is no safety or intended purpose. There will be no responsible disclosure, transparency, privacy rights, data protection, or data security by OpenAI. No one at OpenAI will be identified or held accountable. There will be no timely process to contest AI outputs. Because the dataset is vastly biased, there will always be unfair discrimination against marginalised and disadvantaged individuals, communities and groups who are not part of the dominant existing hegemony. It will always be terrible for the environment.

So, are Australians using AI responsibly when they use GPT? If you think ‘yes’, then do you disagree with the Australian AI Ethics principles?

Or, if you agree with the principles and think that you can use GPT responsibly, is it because you think that some principles are more important than others? Or do you think it's OK to skip some principles? Or are you putting the burden of responsibility back on OpenAI and the government? Perhaps you believe that regulators should control or limit it? But Australia has no AI regulator.

What should citizens do when the government cannot regulate technology to improve or ensure their safety? Is it responsible to use GPT? Will you keep using it anyway?

Oscar Oviedo-Trespalacios

Misuse of Technology & Human Factors Researcher | Responsible Risk Management Professor | Keynote Speaker | Editor | Board Member | Expert Court Witness

1y

Some examples here: https://dx.doi.org/10.2139/ssrn.4346827

Elliot Duff

Independent Robotics Research Consultant

1y

Thanks S. Kate Devitt for opening this can of worms. I mostly agree with what you've said. While GPT can be helpful, in many cases the responses are statistical garbage and may contain biases without any means of interrogating or correcting them. It will likely take some time to fully consider the ethical implications of these tools, and as has been noted, the framework may be aspirational. However, if that is the case, what steps can be taken to improve the situation? My main concern is our misconception that the Internet is the sum of all human knowledge, when clearly it is not: it is mostly filled with marketing hype, political misinformation, conspiracies and social fluff. Social media and search do not make a knowledge management system. So how can we go forward? Do we need to actively create data sets that are ethical?

Brian Lovell

Professor of AI at The University of Queensland

1y

Also everyone who uses face recognition on their iPhone is in breach of the Australian AI Ethical Framework.

Brian Lovell

Professor of AI at The University of Queensland

1y

ChatGPT is based on huge amounts of data gathered without consent. This is clearly a breach of Australia's AI Ethics Framework. However, I am betting on ChatGPT becoming all-pervasive. What does that tell us about the Framework?

Michael Milford

Director of the QUT Centre for Robotics, ARC Laureate Fellow, ATSE Fellow. Positioning Systems for Robots and Autonomous Vehicles. Expert Speaker & Advisor on Autonomous Vehicles, AI & Robotics. Educational entrepreneur.

1y

Isn't almost everything we do, use, buy and consume on a daily basis in violation of some or all of these principles? This doesn't absolve the responsibility, but on a pragmatic level it makes it very hard to get people to follow a set of ethical standards with respect to a new "thing" when they already violate those standards on a near-continuous basis in their typical everyday activities.
