Governance frameworks are critical when it comes to deploying AI
AI is advancing rapidly. More businesses are incorporating it into operations. But how much is hype?
We’ve set ourselves a challenge: can you distinguish between blogs written by generative AI and blogs written by me? Is it really as good as people say? Read the blog below and cast your vote as you decide: Did AI Write This?
(blog 2/4 – first one here)
Artificial intelligence has already been transforming the lives of Australians and the organisations we work for. But ChatGPT, and the emergence of other AI chatbots like Microsoft Bing (powered by GPT-4) and Google Bard, is starting to have a significant impact on our broader society and the way we think about innovation and creativity.
The transformation is highlighted by a recent CSIRO publication, The AI Ecosystem Report. In it, Australia’s flagship scientific organisation found that over the next 12 months, Australian organisations will be focused on discovering new ways to differentiate themselves in the market and improve their agility.
How do Australian businesses intend to hit these goals? It’s all about technology, says the CSIRO, and the key tech allowing them to get where they want to go is AI.
But deploying AI, like any technology, requires an understanding of the risks and how to avoid them. According to leading AI expert Dr Catriona Wallace, author of the ServiceNow report Australia’s Digital Goldrush, it’s absolutely critical for organisations to create ethical and governance frameworks around their use of AI.
But why should you care about ethics and governance with a technology like AI? We have all seen what can happen when there is insufficient focus on these areas, particularly in some of the unintended consequences of social media platforms. With the increasing use of AI, it’s critical that teams are taught about ethics and trained to assess the ethical quality of software. Introducing AI without a thorough review of the risks, and of how to avoid them, carries real potential for harm. And when done wrong, this can have a significant negative effect on your organisation’s reputation, on employee wellbeing and on the wellbeing of your customers.
Large language models (LLMs), which underpin the likes of ChatGPT, are trained on massive amounts of data. However, it has not always been clear where some of this data comes from, whether these LLMs adequately represent the people they provide information about, or whether the data should have been used by the models at all. If the data set they are trained on is biased or incomplete, for example, they could produce responses that are inappropriate or harmful, with serious consequences for an organisation. Those answers could also affect staff or customers, leading to mistrust and other ethical concerns.
One area ServiceNow is focused on is ensuring that the data it uses in developing LLMs has the appropriate permissions from the owners of that data. We are supporting a research project with Hugging Face, a leader in this field. The BigCode project aims to ensure that data collected to teach models how to write software code comes from sources that have granted permission. The data is also screened to make sure it is free from malicious code, to avoid teaching models how to write dangerous software.
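To make the idea concrete, a permission-and-safety screen of this kind can be sketched in a few lines. This is a hypothetical illustration only, not the actual BigCode or ServiceNow pipeline; the licence allowlist and "suspicious pattern" list are invented for the example:

```python
# Hypothetical sketch of a training-data curation filter, in the spirit of
# permission-aware dataset building. The allowlist and patterns below are
# illustrative assumptions, not the real BigCode rules.

PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-3-clause"}  # assumed allowlist
SUSPICIOUS_PATTERNS = ("rm -rf /", "eval(base64")  # toy malicious-code screen

def keep_sample(source_file: dict) -> bool:
    """Keep a code sample only if its licence grants permission and it
    passes a naive malicious-content screen."""
    licence = source_file.get("license", "").lower()
    if licence not in PERMISSIVE_LICENSES:
        return False  # no permission from the data's owner: drop it
    code = source_file.get("content", "")
    return not any(pattern in code for pattern in SUSPICIOUS_PATTERNS)

dataset = [
    {"license": "MIT", "content": "def add(a, b):\n    return a + b\n"},
    {"license": "proprietary", "content": "print('hello')"},
    {"license": "Apache-2.0", "content": "import os; os.system('rm -rf /')"},
]
curated = [s for s in dataset if keep_sample(s)]
# Only the first sample passes both the permission and safety screens.
```

A real pipeline would of course use far more sophisticated licence detection and security analysis, but the two gates shown, permission first, then content safety, mirror the curation steps described above.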
This lack of clarity around data sources is one of the reasons ServiceNow has partnered with NVIDIA to develop powerful, enterprise-grade generative AI capabilities for use within organisations, built on carefully curated data. This will enable the benefits of AI to extend beyond chat: the partnership will allow ServiceNow to transform business processes with faster, more intelligent workflow automation, using custom large language models trained on data specifically for the ServiceNow Platform.
As Dr Wallace says, successful innovation depends on getting these ethical and governance foundations right.
Regulators around the world are currently rushing to work out how to regulate this technology. When they do, business leaders must ensure their AI systems not only meet the regulations but also, more broadly, meet the expectations of the staff and customers they are designed to serve.
AI is already transformative, and it’s only going to become more so. Forward-thinking organisations must consider the ethical and governance frameworks of their AI deployments. Reputation, as well as customer wellbeing, is at stake.