AI bias: why it can be good
"Zero-day Heroes" vs "Privacy Guardians"
I spent two days last week on a Wrangu 'hackathon' focused on how we can #PutAIToWork for Security, Risk & Privacy - using ServiceNow.
For those of you not completely au fait with a hackathon: we were not breaking into anyone's piggy bank - and only a few of us were actually wearing hoodies. (We were, though, at times, coding in the dark!)
For the avoidance of doubt: a hackathon is a time-bound competitive event that typically ends with a Dragon's Den-style pitch on the product your team has built. Oftentimes the end product is presented with a glossy veneer, while behind the scenes it is held together with plasters and sticky tape. Scores are awarded for innovation, vision, presentation, product demo, and so on.
There are two key measures of a hackathon: learn & have fun.
On both counts we succeeded beyond all doubt. More on that at a later date, I'm sure... watch this space.
Bias, and why it isn't such a bad thing.
If you have spent any time at all reading about AI, one of the risks identified and talked about most is "bias".
AI in today's world pretty much means applying the results of some Machine Learning (ML). The computers are not coming alive just yet. ML is based on training models that have been fed a load of data. Large Language Models (LLMs), such as those behind OpenAI's ChatGPT, have been built on huge public internet data sets, and they typically respond to natural language. There are, rightly, concerns galore about copyright and privacy.
The internet is full of bias. When you point a bunch of machines at skewed data and ask them to identify patterns and predict outputs, of course they will return the bias they have learned. Like any kid, in any environment, they are not to blame.
So what do we do?
Well, ServiceNow's Now Assist capabilities are focused on 'domain-specific' LLMs. What???
Case in point: Text-to-Code
In any script field in ServiceNow, you can ask the model to produce code from natural language. You write a comment (text) specifying the result you want, hit a shortcut key, and the AI wakes up and provides you with some code.
It is trained on the platform, so it knows what kind of record you are in and thus what you might expect. It understands the context.
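To give a flavour, here is the kind of exchange I mean - a hypothetical reconstruction rather than verbatim Now Assist output, with a made-up assignment group sys_id. In a server-side script field on the incident table, a prompt comment like the first line below might come back with something along these lines:

```javascript
// Prompt: get all active P1 incidents assigned to my team

// Illustrative sketch of the kind of code that comes back - it already
// knows it is on a ServiceNow instance and reaches for GlideRecord:
var gr = new GlideRecord('incident');
gr.addQuery('active', true);
gr.addQuery('priority', 1); // P1 maps to priority 1 on the incident table
gr.addQuery('assignment_group', 'a1b2c3d4e5f6...'); // placeholder sys_id, not real
gr.query();
while (gr.next()) {
    gs.info(gr.number + ': ' + gr.short_description);
}
```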
When it doesn't know, it guesses.
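Here is roughly what that looked like - again my own reconstruction, not a verbatim capture. I asked for records from a table called 'play', which does not exist in my instance, and it happily queried it anyway:

```javascript
// Prompt: get all records from the play table

// Syntactically fine, confidently wrong: 'play' is not a table here,
// so this GlideRecord is invalid (gr.isValid() would return false)
// and the loop body never runs.
var gr = new GlideRecord('play');
gr.query();
while (gr.next()) {
    gs.info(gr.getDisplayValue());
}
```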
It is TRYING, but it doesn't know any different - although it should probably know I don't have a table called PLAY, bless it.
Text-to-Code in ServiceNow is a specific use case, built and tuned for a specific domain. It is no good for Othello, but it is VERY good at the things it has been trained to do.
Now Assist Text-to-Code is VERY BIASED. And that's exactly what we need, sometimes.
#PhilGoesDeep