Are you cosy on the AI plateau?
When generative AI hit us last year, I had strong feelings of fascination and fear, but the fascination definitely outweighed the fear. The fear came from the idea that we had moved from GPT-3.5-class to GPT-4-class models so fast that the very near future looked deeply unpredictable.
And then more recently came this idea that we'd somehow reached a plateau: it was going to be much harder for OpenAI and friends to make the next real leap in ability. I think many of us have had the feeling that development has stalled a little, and that we can start to get used to what we've got.
That's been quite cosy in a way. We've actually had time to work out useful ways of employing these assistants.
But this Microsoft presentation from a couple of days ago says exactly the opposite: they see no immediate end to exponential growth in capability. (Given his techy audience, he uses a shark-orca-whale graphic which is very low on information but very high on atmosphere.)
He talks about how the current models are, of course, being used in the development of the next generation of models (singularity, anyone?). So now my initial fears are starting to outweigh the fascination again: what will fundamentally more capable models mean for human employment, engagement, fulfilment and dignity?
The current generation of models is just perfect for me because I do qualitative text analysis and they provide me with a room full of willing and tireless assistants.
We won't be thinking of the next generation of models as assistants at all but as authorities, agents, experts.
It's been cosy on this little plateau, but how long will we stay here? The future isn't going to be just more of the same.