What has A.I. got to do with making beepy noises with kids?
Noise Solution
Using music and technology to contribute to a world where everyone means something, and they know it
Noise Solution’s Operations Director Damien Ribbans has been at Dreamforce, the largest A.I. event in the world, this week.
Here, he talks about how A.I. will integrate with the way we work at Noise Solution.
You will have heard all the acronym-based hype about A.I., LLMs, generative A.I. and things like ChatGPT.
And if you haven’t, you soon will.
The rapid pace of A.I.
To give you some idea of the speed of growth of ‘artificial intelligence’: it took Facebook 4.5 years to reach 100 million users; ChatGPT achieved this in just two months, with an estimated user base of 514 million in 2024. Like it or not, ‘artificial intelligence’ is going to fundamentally change the way we do things.
I have put inverted commas around ‘artificial intelligence’ because it isn’t really intelligent, or at least not sentient. A.I. simply means a computer using massive algorithms to mimic the cognitive functions we associate with the human mind, and learning from the outputs.
While A.I. might appear quasi-human, that isn’t the case. Having said that, the ability of these models to process vast amounts of data almost instantaneously does make them extremely powerful, and as the market penetration rates above suggest, the technology is evolving at a very rapid pace.
Why A.I. presents big possibilities at Noise Solution
Here at Noise Solution, we geek out (a lot) about impact and evidence, and how we can continue to push the boundaries of what is possible in the social impact space. This has already seen us win a bunch of awards and be shortlisted for a bunch more (see this article for details). A.I. models present significant possibilities here, as we can crunch more data than ever in ever-more nuanced ways.
At Dreamforce 2023, I (Damien) presented details of our latest project taking advantage of A.I. and Large Language Models (LLMs).
Massive shoutout here to our technical partners at VRP Consulting who have, again, taken one of our crazy ideas and turned it into digital (non-artificial) reality.
For those of you not lucky enough to be at Dreamforce in San Francisco (did we mention the Foo Fighters are headlining the conference party?), here is a rundown of the work we have been doing and talking about there, along with some idea of the next iterations.
Data tells our story
We have a pretty good handle on capturing, measuring, and managing quantitative (number) impact data. Our participants complete a wellbeing questionnaire at the start and end of a set of sessions with us. This gives us an evidence-based view of the self-reported levels of wellbeing of our participants, along with a national dataset we can compare our data against; we are able to compare by age, gender, location, contract or any combination of those. We are also able to carry out live statistical analysis looking for statistical significance and range of change. That helps us ask: what is the probability that the changes can be attributed to Noise Solution’s work, and how much change was there?
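For the statistically curious, here is a minimal sketch of that kind of analysis: a paired t-test for significance and Cohen’s d for range of change (effect size). The data and library choices here are illustrative assumptions on our part; the real analysis runs live inside our Salesforce ecosystem.

```python
# Illustrative sketch only: hypothetical pre/post wellbeing scores,
# one pair per participant, analysed with SciPy.
import numpy as np
from scipy import stats

pre = np.array([38, 41, 35, 44, 40, 37, 42, 39])
post = np.array([46, 45, 41, 50, 47, 43, 49, 44])

# Paired t-test: how likely is it the change is down to chance?
t_stat, p_value = stats.ttest_rel(post, pre)

# Cohen's d for paired samples: the "range of change" (effect size)
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"p-value: {p_value:.4f}")     # statistical significance
print(f"Cohen's d: {cohens_d:.2f}")  # ~0.5 is moderate, ~0.8+ is large
```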
Spoiler: our work is highly statistically significant with a moderate to high range of change – so it’s massively impactful, statistically.
The quantitative data is great as it gives us a good benchmark, but it is only part of the picture. After every session, musician and participant capture and share a short reflection video, talking about the session they have just had. At the end of a set of sessions we ask family and professionals to complete short video or audio questionnaires reflecting on their experiences (using something called VideoAsk by Typeform – it’s ace). All these data live within our Salesforce ecosystem in our ‘universal bucket of truth’: all held securely, and all connected. We can drill down into any one of those quantitative datapoints (also known as a person!) and show you their story, told by them. The story behind the numbers – that’s the important bit, right?
It's all impact gold dust – so where’s the hiccup?
All these qualitative data are jam-packed with impact gold dust: how did you feel, what did you do, how did it impact you? Video is so much more impactful and empathetic than feedback forms with smiley faces and 1-10 scales.
The problem, though, is that extracting those impact data and making any sense of them at scale is almost impossible. It would be a full-time job to sit and watch every video, pull out the bits that you think are important, note them somewhere else and then try to make sense of it all. It would also be subjective, based on the thoughts or opinions of whoever is watching at the time (or your epistemological and ontological perspectives shaping your research philosophy, if you want to get fancy). A.I. can really help us out here, and we have built a model to do exactly that.
Turning data into something usable – automatically
Our model can grab all those reflection videos, whether from the participant or those around them, strip out the audio and transcribe it to a text file. We can then send those files to an LLM of our choosing (currently GPT-3 or GPT-4, although we hope to take advantage of Einstein GPT from Salesforce as it comes on stream). Those files are sent with a carefully crafted prompt designed to elicit insights on the things that are important to us (autonomy, competence, and relatedness) from our Self Determination Theory-based Theory of Change. These are then returned to our Salesforce org as discrete values, along with some scoring and the most positive and most negative sentences from the transcription. All done automatically.
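To make that concrete, here is a minimal sketch of the pipeline using the OpenAI Python SDK. The model names, prompt wording, file name and response schema here are illustrative assumptions; the production version was built by VRP Consulting and runs within our Salesforce org.

```python
# Illustrative sketch of the transcribe-then-score pipeline described above.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A stand-in for the carefully crafted prompt (the real one is ours alone)
PROMPT = (
    "You will be given a transcript of a reflection video from a music "
    "session. Score the speaker's expressed autonomy, competence and "
    "relatedness from 1-10, and quote their single most positive and most "
    "negative sentence. Reply as JSON with keys: autonomy, competence, "
    "relatedness, most_positive, most_negative."
)

def analyse_reflection(audio_path: str) -> dict:
    # Step 1: transcribe the session audio to text
    with open(audio_path, "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        )

    # Step 2: send the transcript to the LLM with the crafted prompt
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": transcript.text},
        ],
    )

    # Step 3: parse the discrete values, ready to write back to Salesforce
    # (assumes the model replies with bare JSON, as instructed)
    return json.loads(response.choices[0].message.content)

print(analyse_reflection("session_reflection.mp4"))  # hypothetical file
```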
Don’t worry, it’s all safe!
“But wait, what about the security of sensitive information? You can’t send that across the internet willy-nilly!” Of course we can’t, and we don’t. Before we send the transcripts, we strip out and obscure the personally identifiable information (PII) to maintain compliance and ethical responsibility. All the sensitive data remains within the Salesforce security bubble.
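To show the shape of that redaction step, here is a minimal sketch. The patterns and placeholders are hypothetical examples; the real implementation, and the full set of rules it applies inside Salesforce, is our own.

```python
# Illustrative PII-redaction sketch: obscure pattern-matched details and
# known names before a transcript leaves the secure environment.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),         # email addresses
    (re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"), "[PHONE]"),  # UK phone numbers
    (re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"), "[POSTCODE]"),  # UK postcodes
]

def redact(transcript: str, known_names: list[str]) -> str:
    """Obscure pattern-matched PII first, then names we hold on record."""
    for pattern, placeholder in REDACTIONS:
        transcript = pattern.sub(placeholder, transcript)
    for name in known_names:
        transcript = re.sub(rf"\b{re.escape(name)}\b", "[NAME]",
                            transcript, flags=re.IGNORECASE)
    return transcript

print(redact("Call Sam on 07700 900123 or sam@example.org",
             known_names=["Sam"]))
# -> "Call [NAME] on [PHONE] or [EMAIL]"
```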
That’s all very well, but what does it actually mean?
Think about it: every case study or report about a contract you have ever read or written happens after the piece of work has finished. You pick the best two or three people, grab some soundbites, quotes and a couple of nice pictures, then send it off to the funder. Any insights into how it actually went are limited at best, and too late to do anything about. With our new model we can take qualitative data and get insights in the moment, meaning we can react to them and ultimately drive up impact.
More than this, though: because we are returning some of these insights as discrete values (rather than just text responses), we can report over time and see how people’s experiences of autonomy, competence and relatedness changed as our programme was delivered.
Paving the way for early reaction
Once we build this dataset, we will then be able to start looking at predictive models: using these data to predict where we will, and more importantly will not, have the impacts we hope to see, so we can react before the event.
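As a flavour of what that could look like, here is a tiny sketch. Everything in it – the feature choice, the model, the numbers – is a hypothetical placeholder for work that is still at the planning stage.

```python
# Illustrative sketch: flag participants unlikely to see the hoped-for
# impact, using early-session autonomy/competence/relatedness scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: A/C/R scores from the first two sessions,
# and whether wellbeing ultimately improved (1) or not (0)
X = np.array([[6, 5, 7, 6, 4, 6],
              [3, 2, 4, 3, 3, 2],
              [7, 6, 6, 7, 7, 8],
              [4, 3, 3, 4, 2, 3]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new participant early, so staff can adapt before the end
new_participant = np.array([[4, 3, 4, 3, 3, 3]])
risk = 1 - model.predict_proba(new_participant)[0, 1]
print(f"Probability of not seeing hoped-for impact: {risk:.0%}")
```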
Pretty exciting, huh? It is in its very early stages, but we hope to build and refine this model for use by others, so you can take advantage of it too and drive up your own impacts.
We mentioned at the start that we had future iterations planned too. (Not so) fun fact: charities in the UK alone spend 15.8 million hours on reporting annually. That is a significant amount of time and resource that could be spent on more impactful work, making organisations more efficient and able to focus resources where they are needed most. We think we have a bit of a plan to help with that, but that’s for another article…