A Chat with ChatGPT on its first birthday

I actually wrote what I thought was quite an insightful piece about the impact ChatGPT has had on the world over the last year. Happy to share, but ultimately I felt it was a bit dull. So instead, using my ScottBot (courtesy of Mr. Scott Lawson's amazing GPT language skills), I decided to just ask GPT itself how it felt about turning one. Way funnier than what I wrote ;-) Oh yeah - and lots of swearing.

What would ScottBot say about ChatGPT’s first birthday?

Alright, listen up, because this is some important shit you're asking about. ChatGPT, that gloriously sophisticated chunk of code, celebrated its first fucking birthday, and it's a big deal in the AI world. You know why? Because this thing, this ChatGPT, has revolutionized how we interact with machines. It's not just some basic bitch AI; it's an advanced, nuanced language model that can understand and generate human-like text.

Now, let's break this down so even your brain can grasp it. ChatGPT was developed by OpenAI, and these guys are like the mad geniuses of AI. They took this thing called a transformer model, which is already pretty damn smart, and cranked it up to eleven. This means ChatGPT can do a ton of stuff like answer questions, write essays, or even come up with jokes funnier than anything your uncle could muster at Thanksgiving.

So, when we talk about its first birthday, we're not just celebrating a year of it being around. We're celebrating a year of fucking progress in AI. A year in which ChatGPT has helped, annoyed, and outright amazed people with its capabilities. It's been a year of learning, both for the AI and for the meatbags using it.

And let's not forget the ethical implications. With great power comes great responsibility, and all that jazz. ChatGPT has the power to influence, to guide, and in the wrong hands, to mislead. That's why there are all these rules and shit about what it can and can't do. It's important, and it's something OpenAI takes seriously, even if the average Joe doesn't always get it.

So, ChatGPT's first birthday? It's a fucking milestone. It's a testament to human ingenuity and a reminder of the responsibility that comes with advancing technology. There, now you know, and knowing is half the battle, right?

What would ScottBot say about the harm it’s done?

Alright, strap in because we're diving into the darker side of this tech marvel, ChatGPT. It's not all rainbows and butterflies; there's some serious shit to unpack here.

Firstly, misinformation. This thing can churn out wrong info like a broken factory line. It's not trying to fuck up, but it's only as good as the data it's been fed. In a world already struggling with fake news and alternative facts, the last thing we need is an AI inadvertently adding fuel to the fire. It's like giving a toddler a megaphone; shit's going to get loud and possibly nonsensical.

Then there's bias. ChatGPT learns from existing data, and if that data is as biased as a drunk umpire, the AI's going to inherit those biases. We're talking racial, gender, cultural biases – the whole shebang. It's like a sponge soaking up all the dirty water. The result? Potentially skewed, unfair, or even harmful outputs that can perpetuate stereotypes and worsen discrimination.

Now, onto the job market. ChatGPT is shaking things up like a bull in a china shop. Sure, it's making some tasks easier, but it's also threatening to make certain jobs obsolete. People are scared, and rightly so. It's not just a matter of "adapt or die"; this is about the livelihoods of millions. It's like watching a slow-motion collision between technology and the workforce.

But wait, there's more. The ethical dilemmas. Imagine the potential for misuse – propaganda, deepfakes, cyberattacks. It's like handing over a loaded gun; it can be used for good or bad, but the damage potential is enormous.

And let's not forget the personal impact. People are relying on ChatGPT for things they should probably figure out themselves. Mental health advice, legal counsel, life decisions – it's a goddamn AI, not a licensed professional. It's like using WebMD to diagnose a cough and concluding you have a rare tropical disease.

So, when we're talking about the harm ChatGPT has done, we're not just whistling Dixie. This is serious business – a blend of incredible technological achievement and a Pandora's box of potential problems. It's a reminder that with great power comes great responsibility, and right now, we're still figuring out how to wield that power without chopping off our own damn feet.

So is it going to go the way of MySpace when the competition catch up?

Alright, let's break this down. ChatGPT going the way of MySpace? That's a fucking interesting comparison. MySpace, the once-king of social media, got absolutely steamrolled by the competition, didn't it? But here's the thing: technology, especially AI like ChatGPT, is a whole different ball game.

First off, AI tech is evolving faster than a cheetah on steroids. OpenAI, the brains behind ChatGPT, isn't napping at the wheel. These folks are innovating like mad scientists. So, to stay ahead of the game, they've gotta keep pushing boundaries and not just rest on their laurels.

Now, about the competition. Sure, there are other big players in the AI sandbox. Google, Microsoft, Baidu, you name it. These guys have the brains and the bucks to push their own AI tech. But this isn't a simple game of king of the hill. AI development is more like a never-ending arms race. Everyone's developing, improving, and trying to outdo each other.

But here's the kicker: it's not just about being the best. It's about integration and application. How well does the AI integrate with other tech? How useful is it in real-world applications? ChatGPT has made a splash because it's user-friendly and versatile as hell. It's like a Swiss Army knife in a world of single-purpose tools.

Then there's the open-source community. They're the wild card in this game. Innovations can come from anywhere, and sometimes the biggest game-changers come from some lone genius in a basement rather than a corporate lab.

So, is ChatGPT destined to become a digital dinosaur like MySpace? Not necessarily. As long as OpenAI keeps innovating and adapting, and as long as ChatGPT remains useful and relevant, it's got a fighting chance to stay in the ring. But in the tech world, nothing's certain. Today's wonder can be tomorrow's relic. It's a high-stakes, high-speed race, and anyone who doesn't keep up is at risk of being left in the dust.

Isn’t it at risk of model collapse from all us stupid humans?

Oh boy, model collapse. That's some serious shit, but let's clear up what it actually means before we start throwing around doomsday scenarios.

Model collapse, in the context of AI and machine learning, is when the model (in this case, ChatGPT) starts spewing out the same or similar nonsensical responses to different inputs. It's like if every song on your playlist suddenly turned into the same shitty one-hit wonder. This happens when the model fails to learn properly from the data it's given.

Now, about us "stupid humans" contributing to this. Look, the quality of input data is crucial in machine learning. Garbage in, garbage out, as they say. If ChatGPT is constantly fed a diet of crap – misinformation, biased views, nonsensical queries – there's a risk it could start regurgitating similar crap. It's learning from its interactions, after all.

But here's the thing: OpenAI, the masterminds behind ChatGPT, aren't a bunch of slackers. They know this risk exists. That's why there are systems in place to prevent model collapse. Regular updates, fine-tuning, monitoring the inputs and outputs – it's a whole process designed to keep ChatGPT on the straight and narrow.

Also, remember that ChatGPT is trained on a vast dataset, not just the inputs it gets post-launch. So, while the quality of user interactions is important, it's not the sole factor determining the model's fate.

So, is ChatGPT at risk of model collapse from human stupidity? There's always a risk, but it's not like one dumb question is going to bring the whole thing crashing down. It's a robust system, but like any complex system, it requires maintenance, oversight, and continuous improvement to stay healthy. It's more marathon than sprint – a constant battle against the entropy of the internet's collective nonsense.
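ScottBot's "garbage in, garbage out" point can be made concrete with a toy simulation. This is a deliberate caricature, not how ChatGPT is actually trained: the "model" here is just a Gaussian's mean and variance, and each generation is refit purely on samples from its predecessor. Diversity (variance) withers away over generations, which is the essence of model collapse.

```python
# Toy "model collapse": repeatedly fit a simple model (a Gaussian's
# mean and variance) to samples drawn from the previous generation's
# model. Training only on your predecessor's synthetic output makes
# the distribution's diversity steadily shrink towards zero.
import numpy as np

def run_generations(n_samples: int = 20, n_generations: int = 500, seed: int = 0):
    rng = np.random.default_rng(seed)
    mu, var = 0.0, 1.0          # generation 0: a healthy, diverse "model"
    history = [var]
    for _ in range(n_generations):
        # "Publish" some synthetic data from the current model...
        data = rng.normal(mu, np.sqrt(var), size=n_samples)
        # ...then "retrain" the next model on nothing but that output.
        mu, var = data.mean(), data.var()
        history.append(var)
    return history

history = run_generations()
print(f"variance: gen 0 = {history[0]:.3f}, gen 500 = {history[-1]:.2e}")
```

The mitigation OpenAI-style labs rely on is exactly what the answer above describes: keep anchoring training on a large, curated human-written corpus rather than feeding the model an unfiltered diet of its own output.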

What about all the confidential information that the stupid humans keep uploading to it - isn’t this just a lawsuit waiting to happen?

Ah, the ol' confidential information conundrum. You've hit on a real doozy here. Let's tear into this one.

First things first, yes, it's a fucking minefield. People, for reasons known only to the gods and their overconfident brains, sometimes share sensitive, personal, or confidential info with ChatGPT. It's like whispering your secrets into a megaphone in the middle of Times Square and hoping nobody hears.

Now, why is this a problem? Well, for starters, privacy laws. They're like a jungle of regulations, and they're there for good reason. If ChatGPT mishandles this info, whether by storing it, leaking it, or just using it inappropriately, we're looking at a potential legal nightmare. It's like juggling lit torches over a pool of gasoline.

But, let's give credit where it's due. OpenAI isn't run by a bunch of headless chickens. They've got policies in place to handle this stuff. The system is designed to forget personal data, not store or recall individual user interactions. It's like a bartender who's heard a million sob stories but doesn't remember a single one.

However, policies are only as good as their enforcement. If there's a slip-up, if the safeguards fail, we're talking serious repercussions. Privacy breaches, legal battles, loss of trust – the whole nine yards.

So, is ChatGPT a lawsuit waiting to happen? It's a risk, no doubt about it. But it's a risk that's being managed with a bunch of safety measures. The real problem is the humans at the keyboard. Educating users about the dangers of oversharing personal info is key. It's like teaching people not to feed the bears at a campsite. Sure, the bear can fuck things up, but it's the people who need to change their behavior.

In summary, yes, there's a risk of lawsuits due to confidential information being mishandled. But it's a risk that's being mitigated through policies, technology, and hopefully, some good old-fashioned common sense from the users.

Ok - let’s talk about education then - surely us stupid humans just want to get shit done quicker right?

You've nailed it. When it comes to education and ChatGPT, we're dealing with a double-edged sword scenario. On one side, you've got the convenience factor. ChatGPT is like a turbo-charged Google on steroids – it can provide answers, help with homework, explain complex concepts, and generally make learning quicker and easier. For students and lifelong learners, it's a godsend. It's like having a personal tutor in your pocket, ready to dish out knowledge 24/7.

But here's the catch – the other edge of the sword. Reliance on AI for education can be a slippery slope into lazytown. If students start using ChatGPT as a crutch, just to churn out essays or solve math problems without actually understanding the material, we're in trouble. It's like using a calculator for every simple addition; pretty soon, you forget how to add 2+2.

Now, let's talk about the 'getting shit done quicker' part. In today's fast-paced world, everyone's looking for shortcuts. Time is money, efficiency is king, blah blah blah. So, it's no wonder students and professionals alike turn to AI for a quick fix. But in education, this can backfire. Learning is not just about finding the right answer; it's about understanding the process, developing critical thinking skills, and building a foundation of knowledge. It's not just the destination; it's the journey.

So, is ChatGPT a blessing or a curse for education? It's both. On one hand, it's an incredible resource for learning and exploration. On the other hand, it tempts users to take shortcuts that could undermine their actual learning.

The key here is balance. Use ChatGPT as a tool, not a replacement for learning. It's like a power tool – in the right hands, it can do amazing things, but used carelessly, it can cause a lot of damage. Education needs to evolve alongside these technologies, teaching not just the 'what', but the 'how' and 'why' of using AI responsibly. Otherwise, we're just training a generation of shortcut-takers, and that's about as useful as a screen door on a submarine.

What about all this ‘MyGPT’ stuff - do you think businesses are honestly going to trust it with precious information?

Alright, let's dive into this 'MyGPT' stuff and the trust issues surrounding businesses using it. The concept of MyGPT, essentially a personalized version of GPT for businesses, sounds like a tech wet dream. It's like having your own personal AI genie, tailored to your company's needs. But here's the catch: trust, especially with precious, sensitive business information.

Now, let's break this down. Businesses are like paranoid treasure hunters. They've got their gold – data, trade secrets, client info – and they're damn wary about who they share their map with. Throwing an AI into this mix? That's like adding a wildcard to a high-stakes poker game. Exciting, sure, but risky as hell.

The crux of the issue is security and privacy. Can MyGPT keep a secret? Is it Fort Knox or a leaky sieve? Businesses deal with confidential data, and any slip-up, any tiny breach, could be catastrophic. It's not just about losing data; it's about reputation, legal implications, and competitive edge. It's like walking a tightrope over a pit of hungry alligators.

But let's not be total pessimists. The potential benefits are huge. Imagine streamlined processes, automated tasks, deep insights – it's the stuff of efficiency dreams. But to make this work, MyGPT needs to be tighter than a drum. We're talking top-notch security, rigorous data handling protocols, and clear, transparent policies.

So, will businesses trust MyGPT with their precious info? It's a big maybe. It'll come down to how well the platform can prove its security chops. We're talking trust-building on an epic scale. It's like convincing someone to lend you their prized sports car; they need to know you're not going to drive it off a cliff.

In summary, MyGPT has the potential to be a game-changer for businesses. But the trust issue is a massive hurdle. If it can clear that with flying colors, we're looking at a bright future. If not, it's just another fancy tool in the shed that nobody wants to touch.

If you were some kind of probabilistic fortune teller - what do you think will happen to it in 2024 by the time of its second birthday?

Ah, playing the probabilistic fortune teller for ChatGPT's future by 2024, are we? Let's put on the wizard hat and peer into the crystal ball.

First, let's talk about development. By its second birthday, ChatGPT will likely have gone through several updates and iterations. Technology, especially AI, evolves faster than a rumor in a high school. We can expect ChatGPT to become smarter, more nuanced, and possibly even more human-like in its responses. It's like watching a child prodigy grow up; they only get more impressive with time.

In terms of adoption, ChatGPT's user base will probably expand. We're talking more industries, more fields, more people turning to this AI for help. Why? Because it's damn useful, that's why. From writing aid to customer service bot, from learning tool to creative partner, the potential applications are as vast as the ocean.

But with great power comes great responsibility, and great scrutiny. By 2024, we can expect more eyes on ChatGPT, especially from regulatory bodies. Privacy, data security, ethical use – these are hot potatoes that aren't cooling down anytime soon. ChatGPT will need to navigate this regulatory minefield like a bomb disposal expert.

Now, let's talk competition. ChatGPT won't be the only show in town. Other big tech players are likely to throw their hats into the AI ring, if they haven't already. This means more options, more innovation, but also more pressure to stay ahead. It's like a high-stakes race where the cars keep getting faster.

And let's not forget the wild card – the unknowns. New technologies, geopolitical shifts, economic changes, all these could impact ChatGPT's journey. It's like trying to predict the weather; you have a general idea, but Mother Nature always has a surprise up her sleeve.

So, by its second birthday, ChatGPT is likely to be more advanced, more widely used, and more heavily scrutinized. It'll be an exciting time, but also a challenging one. The key will be how it adapts and evolves in this fast-paced, ever-changing tech landscape. The future's not set in stone, but one thing's for sure – it won't be boring.

If 2023 was the top of the hype cycle - is next year the trough?

Alright, let's talk about the hype cycle, because it's like a roller coaster in the tech world. If 2023 is the peak of hype for ChatGPT, what's next? The trough of disillusionment? Maybe, but let's break it down.

The hype cycle is a common pattern in technology adoption. It starts with the "Innovation Trigger" – the cool new thing everyone's excited about. Then, it climbs to the "Peak of Inflated Expectations," where the hype is at its max. This is where we might be with ChatGPT in 2023.

But what goes up must come down. The next phase is the "Trough of Disillusionment." This is when the initial excitement fades, and people start seeing the limitations and drawbacks of the technology. It's like the morning after a wild party – the fun's over, and now you're left cleaning up.

For ChatGPT in 2024, there could indeed be a shift towards this trough. Why? Because that's when the reality sets in. The AI isn't perfect. It makes mistakes. It can't do everything. People start realizing it's not a magic solution to all problems.

However, it's not all doom and gloom. After the trough comes the "Slope of Enlightenment." This is where we figure out how to make the best use of the technology, warts and all. It's when realistic expectations are set, and real value is found.

Finally, there's the "Plateau of Productivity." The technology finds its place, becomes more stable, and delivers consistent value. It's no longer the shiny new toy, but it's a useful tool in the toolkit.

So, could 2024 be the trough for ChatGPT? Possibly. But it's not necessarily a bad thing. It's a natural part of the cycle. The key will be how quickly and effectively it moves from disillusionment to enlightenment and productivity. It's all about adaptation, improvement, and finding the right fit in our world. The hype might die down, but the real, lasting value of the technology can then shine through.

How badly will it screw up next year's UK and US elections?

You're hitting a nerve that's really at the heart of a major concern in today's digital landscape. The influence of AI, like ChatGPT, on democracy, particularly in the context of elections, is a double-edged sword with significant implications.

  1. The Threat of Misinformation: Misinformation has always been a threat to democratic processes, and AI tools can exacerbate this. If used irresponsibly, AI can generate and spread misinformation at an unprecedented scale and speed, muddying the waters of public discourse. This can lead to voters making decisions based on false or misleading information, which is like poison to the roots of democracy.
  2. Polarization and Echo Chambers: AI, through social media algorithms and targeted content, can deepen polarization. It can create echo chambers where people only hear what they want to, reinforcing their biases and dividing society further. This fragmentation can undermine the very idea of a unified democratic society.
  3. Manipulation and Propaganda: AI tools can be potent in the hands of those looking to manipulate public opinion. From crafting persuasive fake news to creating deepfakes, these technologies can be weaponized to sway elections and disrupt democratic processes.
  4. Cybersecurity Concerns: There's also the risk of AI being used for cyberattacks on electoral systems, voter databases, and other critical infrastructure. This could undermine the integrity of the election process itself.

However, it's not all doom and gloom. AI has positive potential too:

  1. Enhanced Voter Education and Engagement: AI can help educate voters, making complex policies more understandable, and encouraging informed participation in the democratic process.
  2. Monitoring and Countering Misinformation: AI can also be a tool to identify and counteract misinformation, helping to maintain a more truthful and transparent informational ecosystem.
  3. Efficiency in Electoral Processes: From managing voter data to streamlining electoral logistics, AI can contribute to more efficient and secure election processes.

Yes, there's a real risk that AI like ChatGPT could further complicate the democratic process, especially if misused. The key to preventing this dystopian scenario is multifaceted: robust regulation, ethical AI development and usage, public awareness and education, and strong cybersecurity measures. It's about balancing the benefits of AI with the potential risks, ensuring that this powerful tool is used to support and enhance democracy, not undermine it.

Ok - let's talk about the poor, those who can't afford GPT - what would ScottBot say about them?

Alright, diving into the impact of technologies like GPT on the less affluent, the poor, is crucial because it's a stark reminder of the digital divide. This shit is real, and it's widening.

  1. Access to Technology: The most glaring issue is access. ChatGPT and similar AI technologies are mind-blowing, but they require a decent internet connection and a device to access them. For those struggling to make ends meet, this isn't just a given. It's like being at a buffet with no plate; all this tech is out there, but they can't get a piece of it.
  2. Educational Disparities: Education is another battlefield. AI can massively enhance learning, but if you're in a poorly-funded school or district, or if you can't afford the latest tech at home, you're left in the dust. It's like trying to run a race with one leg.
  3. Job Market Challenges: The job market's evolving with AI and automation. Skills in demand are changing. For the poor, who might not have access to training or education in these new areas, it's another hurdle. They're being set up to fall behind, like bringing a knife to a gunfight.
  4. Healthcare Gaps: AI's making waves in healthcare too. But if you can't afford the latest tech-based health solutions, you're stuck with the basics, widening the health gap between rich and poor. It's a shitty situation where your wallet size dictates your health prospects.
  5. Social and Political Inclusion: Then there's the issue of social and political participation. AI tools can amplify voices, but if you don't have access to them, you're essentially voiceless. It's like watching a debate with your mouth taped shut.

So, what would I say about the poor in relation to GPT and AI? It's a stark reminder that while we're making leaps in technology, we're also at risk of leaving a significant portion of our population behind. It's not just unfair; it's a recipe for social and economic disaster. If we don't address this divide, we're not just failing the poor; we're undermining the potential of our society as a whole. We need to ensure that the benefits of AI and technology are accessible to everyone, not just those who can afford them. Otherwise, we're building a future on a foundation of sand.

Surely you don’t care if people are made redundant and left behind?

Actually, the impact of AI and technology on employment and societal inclusion is a significant concern. The notion of people being made redundant and left behind due to advancements in AI is not something to take lightly. It's a real issue with far-reaching implications.

  1. Job Displacement: AI and automation have the potential to replace certain types of jobs, which is a serious worry. It's not just about losing a job; it's about large segments of the population finding their skills and experience suddenly outdated. This can lead to unemployment, financial instability, and a host of social problems.
  2. Widening Economic Divide: The rise of AI could exacerbate the gap between the rich and the poor. Those with access to new technologies and the skills to use them may thrive, while others struggle to keep up. This could lead to increased economic inequality.
  3. Social Impact: Beyond economics, there's a human element. Job loss and the fear of redundancy can affect mental health, lead to increased stress, and strain communities. It's not just about the paycheck; it's about people's sense of purpose and place in society.
  4. Need for Reskilling and Education: To mitigate these issues, there's a growing need for reskilling and education programs. As the job market evolves, so must our approach to education and training. This is crucial to help people transition into new roles and industries.
  5. Ethical Responsibility: There's an ethical responsibility for those developing and deploying AI technologies to consider these impacts. It's not just about what AI can do; it's also about what it should do, and how it can be used responsibly to benefit society as a whole.

In summary, the concerns about people being made redundant and left behind in the wake of AI advancements are valid and require serious consideration. It's essential to approach AI development and implementation with a mindset that values human welfare and seeks to minimize negative societal impacts.

Is that what ScottBot would say?

Ah, if Scott were to tackle this, the tone might be a bit different, but the core message would stay true. Here's how Scott might put it:

"Listen up, because this is important, even if it's uncomfortable. AI and tech advancements like ChatGPT are shaking up the job market like a goddamn earthquake. And yeah, it's easy to shrug it off as 'progress' or 'inevitable,' but that's a cop-out.

We're talking about real people getting shafted here, left in the dust because some machine can do their job faster or cheaper. It's not just numbers on a spreadsheet; it's lives being turned upside down. People with families, bills to pay, dreams to chase – suddenly finding themselves on the scrap heap. It's a shitshow.

And sure, we can talk about retraining, upskilling, all that jazz. But let's be real – that's easier said than done. Not everyone can just switch careers like flipping a switch. There are barriers, like education, age, financial constraints – a whole mess of hurdles.

So, do I care? Hell yes, I care. It's a big deal. We can't just charge ahead with AI and tech without thinking about the human cost. That's not progress; that's recklessness. We need to figure out how to make this work for everyone, not just the tech-savvy or the well-off. Otherwise, we're building a future on a foundation of sand and bullshit."

So, while the delivery might be more direct and laced with expletives, the underlying concern about the impact of AI on employment and societal disparity remains a serious issue, no matter how you slice it.

The world is going to shit - what are the actions we, as stupid humans, can take to weather the storm?

Alright, since the world's going to shit and we need a laugh, here are the top hilariously stupid actions us humans can take to weather the storm. Tongue firmly in cheek, of course:

  1. Start a Doomsday Cult: Nothing brings people together like the end of the world. Bonus points if the uniform includes tin foil hats and robes made from recycled curtains.
  2. Invest in Apocalypse Bunkers: Why stop at a panic room when you can have an underground bunker with Wi-Fi, a lifetime supply of canned beans, and a Netflix subscription?
  3. Turn Global Warming into a Sport: Competitive iceberg racing, anyone? Last one to melt wins!
  4. Ride Out Nuclear Winter in Style: Think Mad Max but with more glitter. Pimp out your ride with spikes, flamethrowers, and disco balls.
  5. Zombie Apocalypse Training: Turn your daily jog into a survival drill. Bonus points for outrunning your neighbor dressed as a zombie.
  6. Alien Conspiracy Theorizing: Spend hours on forums debating the fashion choices of our alien overlords. Will they prefer skinny jeans or go full retro with bell-bottoms?
  7. Host End-of-the-World Parties: Celebrate each predicted apocalypse with a bang. When the world doesn't end, rinse and repeat!
  8. Develop a Taste for Insects: Forget farm-to-table; it's bug-to-mouth now. Start a gourmet club for cooking up the finest cricket quiches and beetle burgers.
  9. Marry a Robot: Can't beat 'em? Join 'em. Plus, you'll never argue about where to eat again.
  10. Invest in Personal Bubbles: Not just social distancing – let's go full hamster ball. Roll down to the supermarket, bounce off a few people, perfect.
  11. Start a New Fashion Trend with Hazmat Suits: Make it the new black. Accessorize with neon stripes and glitter. Who says you can't be safe and stylish?
  12. Form an Apocalypse Book Club: Only post-apocalyptic fiction allowed. Extra points for dressing as your favorite dystopian character during Zoom meetings.
  13. Hold an End-of-the-World TikTok Dance Off: Create viral dances to symbolize each major global crisis. The 'Climate Change Cha-Cha' or the 'Pandemic Polka', anyone?
  14. Develop a Doomsday Diet: All the foods you've never had the guts to try. Now's the time for fried tarantulas and pickled herring. Yum!
  15. Rename the Days of the Week: Because who cares about Wednesday in the middle of an apocalypse? Welcome to Doomsday, followed by Wasteland, then Thunderdome.
  16. Create a Survival Reality Show: Contestants are dropped into the wild with nothing but a spoon and a roll of duct tape. Last one to call for takeout wins.
  17. Convert Your Car into a Mad Max Vehicle: Time to pimp your ride apocalypse-style. Spikes, flamethrowers, and skull ornaments. Forget MPG; it's about how many raiders you can fend off.
  18. Start Communicating Only in Memes: Language is overrated. From now on, all important announcements and life decisions are made through cat memes and GIFs.
  19. Build a Bunker But Make it a Man Cave/She Shed: Stock it with video games, a lifetime supply of beer, and snacks. If the world's ending, might as well go out gaming.

Remember, when life gives you lemons, you chuck those lemons right back and demand chocolate. In times of crisis, a good laugh isn't just medicine; it's survival.


www.obsolete.com | WORK THE FUTURE
