It is hard to believe but later this week, on Thursday, 30 November, it will be exactly one year since ChatGPT was released. I'm not sure whether it is harder to believe that it’s only been one year or that it’s been one year already. So much has happened and yet we are still at the start of the journey. So many people have not yet even used ChatGPT while for others it has become their daily companion. And ChatGPT is not the only game in town.
To mark the anniversary, I'm doing two things:
In mid-October we released a report called Beyond ChatGPT: State of AI in Generative Practice and I’ll be running a webinar about the report and some of the developments since then on Thursday.
I also wanted to take this opportunity to ask what lessons have people learned so far and what are they looking forward to about generative AI?
I created a survey to help me gather a bit more structured data about how people in digital education are thinking about ChatGPT now that it's been with us for a year.
If you want an incentive to take the survey, many of the questions can also be used as suggestions for how to use AI, which tools to look at, and ideas for reflection.
My personal 4 lessons and 1 hope for the future are:
- Despite all the hype, Generative AI is something real and is already having real impact. This is rare in the history of tech or edtech. Usually, we only see this much hype around things that prove to be much less transformative (from interactive whiteboards to VR). One of the reasons for this is that the success of ChatGPT was mostly driven by users and not companies. It is easy to forget, but ChatGPT was released as an experiment and OpenAI were expecting 100,000 users in the first week – not the 1,000,000 they got. And so many of the lessons we’ve learned have come from users simply trying to see what else these tools can do.
- It would be a mistake to focus only on ChatGPT; there are many other tools out there. Some that I use every day are: Claude – for summarising papers, Perplexity – for AI-assisted search, Midjourney – for image generation, and Google Bard – for image interpretation.
- It is not useful to approach generative AI from a negative perspective – trying to catch it out. It is much more productive to look for opportunities. But once we have found the opportunities, we need to examine them soberly and with a realistic eye – it is so easy to be misled by success at one task into assuming a tool will also work elsewhere.
- Hallucination is a real and unsolved problem, and prompt engineering matters. Since last year, we’ve seen the release of the much more powerful GPT-4 and Claude 2. Yet they still hallucinate – a lot. And even though the power of these new chatbots is that you can just talk to them, formulating your prompts more carefully can have a big impact on what you get back.
- The thing I’m most excited about is the potential for generative AI as assistive technology, bridging the accessibility gap. Multimodal models like GPT-4 can already do a really great job of describing images or even whole interfaces, and I can’t wait for this to be integrated into tools like screen readers. If you want to see what I’m talking about, just take a screenshot of something and paste it into Google Bard (if you don’t have ChatGPT Plus). And if you want to see an example of the difference it is making in people’s lives now, watch this video about Be My Eyes.