Gen AI Network Melbourne meetup #2: thoughts
On Tuesday 13 February 2024 we hosted the second Gen AI Network Melbourne meetup. Once again it was held at REA Group's offices; we had a hard time finding a venue large enough to fit everyone in! Thanks to REA for stepping in to support us. With over 90 people in attendance, it was a great opportunity to connect with others interested in the space and learn some cool things!
The first talk was given by James Bardsley of In Marketing We Trust, and focused on how to use LLMs effectively for SEO content creation. Below are my notes from the talk:
To set the stage: content (that is, text/copy and images) is hugely important for search engine optimisation (SEO), which is why, for example, you often have to read a ten-minute story before you get to the recipe on cooking websites. Without that content, a website is less likely to rank well on search engines like Google, and if it doesn't rank well, traffic to the site will be greatly reduced. It's not enough for content to merely exist, though: it also needs to be fresh and relevant. You can't just write gibberish, or publish stories that are entirely unrelated; search engine ranking algorithms are sophisticated enough to detect this and will punish you for it. James then moved into a case study about a large, well-known travel website, where he and his team worked to deliver content using large language models (LLMs): think OpenAI's ChatGPT or Google's Gemini.
This website had 4,800 pages that needed their content refreshed: the content was stale, and traffic to these pages was dropping. Refreshing the content for all these pages using traditional methods, that is, human copywriters, would cost around 5 million dollars (roughly a thousand dollars per page). This is obviously an enormous outlay, so when the opportunity arose to experiment with LLMs to potentially do it at a much lower cost, it was a no-brainer.
James and his team then used GPT to write content for the top 10 most popular pages on the site as a small-scale test rollout, and then used GPT to translate those pages into other languages. And they saw good results! The output was promising, and traffic did not decline after the content was updated on the site.
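For the curious, the mechanics of a generate-then-translate pass like this are fairly simple to sketch. Here's a minimal, hypothetical version using the OpenAI Python client; the model name, prompts, and two-pass structure are my own assumptions, not the actual pipeline James described:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MODEL = "gpt-4o-mini"  # placeholder; the talk didn't specify a model

def refresh_copy(page_topic: str, stale_copy: str) -> str:
    """Ask the model to rewrite stale page copy (hypothetical prompt)."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You are a travel copywriter. "
             "Rewrite the provided page copy so it is fresh and engaging."},
            {"role": "user", "content": f"Topic: {page_topic}\n\n{stale_copy}"},
        ],
    )
    return response.choices[0].message.content

def translate_copy(copy: str, target_language: str) -> str:
    """Second pass: translate the refreshed copy."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": f"Translate into {target_language}."},
            {"role": "user", "content": copy},
        ],
    )
    return response.choices[0].message.content
```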
Was it that simple? Seems like you could just plug all the articles in and call it a day, right? Unfortunately, as with all things, if it seems too good to be true, that may just be the case. The team ran into a bunch of challenges:

- GPT couldn't stick to the brand guidelines the company needed it to follow
- Translation was poor; GPT was very literal in its translations, which led to some incorrect results: think proper nouns being translated when they shouldn't have been
- The content was at times entirely made up; GPT suffered from hallucinations. A hallucination is when an LLM, in trying to generate an answer, produces false or misleading information, often because the data it is working from is problematic
Obviously, all three of these are problematic, and so the team needed to work out how to solve them, and took a number of corrective actions to do so.
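As one illustration of the kind of guardrail that can help with the brand-guideline and literal-translation problems, you can pin the rules and a do-not-translate glossary directly into the system prompt. This is my own sketch, not necessarily one of the team's actual fixes, and the rules and terms below are made up:

```python
# Hypothetical guardrail: embed brand rules and a do-not-translate
# glossary in the system prompt. Illustrative only.
BRAND_RULES = "Write in second person. Avoid superlatives like 'best ever'."
PROTECTED_TERMS = ["Great Ocean Road", "Flinders Street Station"]  # made up

def build_translation_system_prompt(target_language: str) -> str:
    """Build a system prompt that constrains style and translation."""
    glossary = ", ".join(PROTECTED_TERMS)
    return (
        f"Translate the user's text into {target_language}. "
        f"Follow these brand rules: {BRAND_RULES} "
        f"Do NOT translate these proper nouns; copy them verbatim: {glossary}."
    )
```

Hallucinations are a harder problem; grounding the model in retrieved source material (more on that idea in the second talk's notes below) plus human review of the output are the usual levers.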
So the process is a lot more complicated than it was initially conceived to be, but it's also a lot easier and cheaper than having a human write all those articles by hand without support.
Finally, the result was a 12% increase in SEO traffic, at a tenth of the cost of doing it manually. Sounds like a good application of AI if you ask me!
The second talk was presented by Lilly Ryan and Ned Letcher, both of Thoughtworks. Their talk focused on reframing LLMs so that you have realistic expectations of them. The following are my notes from their talk:
We ascribe human characteristics to LLMs even though they're not human, and that can muddy our expectations of what they say. LLMs say weird things sometimes (remember the hallucinations we talked about earlier?). (Side note: a great example of this is the rise of Romantic AI chatbots; you can read about them and the dangers associated with them here.)
This is partly due to automation bias: people trust technology even when their own knowledge tells them they shouldn't. (There has been lots of awareness raised about the way GPTs work and how they aren't sentient or able to reason, yet there are plenty of people who take everything GPTs say as gospel.)
So, LLMs can't be controlled at the level of meaning, because they don't have a concept of meaning. You can, however, massage how you communicate with them to get good outputs (think prompt engineering). You can also teach LLMs so that they produce results more closely aligned with what you're looking for; a couple of methods to think about here include retrieval-augmented generation (RAG), self-supervised learning, and reinforcement learning from human feedback (RLHF).
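To make the RAG idea concrete, here's a toy, self-contained sketch: retrieve the passages most relevant to a question and paste them into the prompt, so the model answers from supplied facts rather than from memory. Real systems use embedding similarity and a vector store; the word-overlap scoring here is just a stand-in:

```python
from collections import Counter

# Tiny "knowledge base" the model should answer from.
DOCUMENTS = [
    "The meetup was held at REA Group's offices in Melbourne.",
    "The first talk covered using LLMs for SEO content creation.",
    "The second talk was about setting realistic expectations of LLMs.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: count overlapping words (a stand-in for
    the embedding similarity a real RAG system would use)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def build_prompt(question: str, k: int = 2) -> str:
    """Retrieve the top-k documents and ground the prompt in them."""
    top = sorted(DOCUMENTS, key=lambda d: score(question, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return (
        "Answer using ONLY the context below. If the answer isn't there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("Where was the meetup held?"))
```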
When teaching LLMs, it is important to always measure and evaluate the results. If you don't do this, how can you tell whether what you're doing is supporting your end goals?
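As a sketch of what that measurement loop might look like in its simplest form: keep a fixed set of test questions with expected properties, run them through your pipeline after every change, and track the score over time. The keyword check below is a placeholder metric of my own; real evaluations would use task-appropriate scoring (human review, similarity metrics, and so on):

```python
# Minimal evaluation loop: fixed test cases, simple keyword check.
TEST_CASES = [
    {"question": "Where was the meetup held?", "must_mention": "REA"},
    {"question": "What was the first talk about?", "must_mention": "SEO"},
]

def evaluate(answer_fn) -> float:
    """Return the fraction of test cases whose answer mentions the
    expected keyword. answer_fn is the LLM pipeline under test."""
    passed = sum(
        1 for case in TEST_CASES
        if case["must_mention"].lower() in answer_fn(case["question"]).lower()
    )
    return passed / len(TEST_CASES)

# Example with a stub pipeline (scores 0.5: mentions REA but not SEO):
print(evaluate(lambda q: "It was hosted at REA Group's offices."))
```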
So: pick your use cases for LLMs carefully, don't trust them blindly, teach them as appropriate, and validate their outputs, and there are plenty of situations where LLMs can actually be great tools to use!
Overall it was a great event, and I'm looking forward to the next one! If you're keen to follow along, feel free to join our meetup here.