When AI Codes: A Developer's Existential Crisis

Introduction

Python, the current darling of programming languages, is booming in business thanks to its simplicity, versatility, vast libraries, and the added bonus of being open-source. As an actuary currently between contracts, I decided brushing up on Python was a practical way to spend some downtime. Little did I know this journey would become an unexpected deep dive into the astonishing capabilities of Artificial Intelligence (AI) and spark a very real question: how safe are our jobs?

The Problem

Let me take you back to my childhood. I used to spend hours playing a Solitaire card game, usually to pass the time on lazy Sundays or during long flights. The game was deceptively simple but frustratingly hard to win, and whether you win is determined entirely by the initial card configuration. Perfect fodder for a coding challenge, right? Here’s how it works:

1. Shuffle the Deck: Start with a shuffled standard 52-card deck.

2. Deal Initial Cards: Deal 4 cards face-up to create 4 piles.

3. Merge Matching Ranks: If two or more cards have the same rank (e.g. two Kings), place the right-hand card(s) on top of the left-hand card(s).

4. Deal More Cards: Deal 4 more cards from the deck, one on each pile.

5. Repeat Merging: Again, merge any matching ranks into the left-hand pile(s), continuing to merge until all face-up cards are of different ranks, or the piles are empty.

6. Handle All-Match Scenarios: If all 4 cards dealt have the same rank (e.g. four 7s), remove those cards from the game.

7. Continue Until Deck Is Empty: Keep dealing and merging until the deck runs out.

8. Gather the Piles: Gather the piles from right to left, forming a single stack.

9.????? Repeat the Process: Use the new stack and repeat steps 2-8 until you win or find the game unwinnable.

You win by removing all cards from the deck. Most of the time, though, the game loops into an unwinnable state. For example, if you’re stuck with cards in the order [Ace, Ace, Ace, 2, 2, 2, 2, Ace], the final Ace will never meet the others because it’s blocked by a 2.
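To make the rules concrete, here is a minimal sketch of the merging step in Python. This is my own interpretation of rules 3, 5 and 6, not anyone's production code: `piles` is assumed to be a list of four lists, with the face-up card last in each.

```python
def merge_round(piles):
    """One merging pass (rules 3 and 5): when two face-up cards share a
    rank, the right-hand card moves on top of the left-hand pile."""
    moved = False
    for i in range(len(piles)):
        if not piles[i]:
            continue
        for j in range(i + 1, len(piles)):
            if piles[j] and piles[j][-1] == piles[i][-1]:
                piles[i].append(piles[j].pop())
                moved = True
    return moved


def merge(piles):
    """Merge until all face-up cards differ; four matching face-up cards
    are removed from the game entirely (my reading of rule 6)."""
    while True:
        tops = [p[-1] for p in piles if p]
        if len(tops) == 4 and len(set(tops)) == 1:
            for p in piles:
                p.pop()  # discard the four-of-a-kind
            continue
        if not merge_round(piles):
            return
```

For example, starting from `piles = [['K'], ['2'], ['K'], ['7']]`, calling `merge(piles)` moves the right-hand King onto the left-hand one, leaving `[['K', 'K'], ['2'], [], ['7']]`.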

My Solution

Excited, I rolled up my sleeves to code this challenge. Python has many handy features to make the task easier:

  • The random.shuffle function helped me create random card configurations.
  • Adding or removing items in a list was a breeze with methods like append and pop. No need to continually redefine the size of arrays!
  • Detecting repeating configurations (a telltale sign of a losing game) was straightforward by creating a list of lists containing all previously seen card configurations and checking if the new configuration was in the list.
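The loop check in the last bullet can be sketched as follows. A "configuration" here is just the gathered stack of ranks; using a set of tuples instead of a list of lists makes the membership test constant-time, but the idea is the same.

```python
def seen_before(stack, seen):
    """Return True if this card order has appeared in an earlier cycle,
    meaning the game is looping and can never be won."""
    key = tuple(stack)  # tuples are hashable, unlike lists
    if key in seen:
        return True
    seen.add(key)
    return False
```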

The tricky part? Testing for wins and losses. A single game takes 45-60 minutes to play out, so debugging was slow. To simplify, I started with a smaller problem: games with only two card ranks (e.g., Aces and 2s). With just 35 possible combinations, it was manageable to verify manually. But as soon as I added a third card rank, the possibilities exploded to 5,775. Testing the full deck? Forget about it.

I ran 250,000 simulations and found a win rate of 19.1%. That felt high. A bit of online research showed someone else had calculated 8.7%, a figure that better matched my childhood experience. Clearly, I’d made a mistake. But where?

Enter ChatGPT

At this point, a friend suggested, “Why not let ChatGPT take a crack at it?” My first reaction? Skepticism. Surely this problem was too nuanced for AI. But hey, it was worth a shot.

I fed ChatGPT the game rules and asked it to write Python code. The initial attempt wasn’t great; the bot’s win probability was 0. But here’s the cool part: I could debug the AI’s code by asking it to print intermediate outputs, like the deck after each deal. This revealed where it was misunderstanding the rules—for example, it wasn’t discarding matching cards.

After half an hour of back-and-forth clarifications, ChatGPT nailed it. The win rate? 8.7%. Using its code, I pinpointed the bug in mine. What had taken me 4-5 hours on my own took about a tenth of that time with ChatGPT.

And here’s the kicker: ChatGPT’s code wasn’t just accurate—it was elegant. It broke the game into clean, modular functions (e.g., one for simulating a round, another for the entire game, and a third for running multiple simulations). While I’d used separate lists for each pile, ChatGPT grouped them into a list of lists. Smart.
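That top layer, running many games and counting wins, is just a Monte Carlo estimate, and it can be sketched independently of the game rules. The `simulate_game` argument below is a hypothetical stand-in for the full game logic, not ChatGPT's actual code.

```python
import random


def run_simulations(simulate_game, n=250_000):
    """Monte Carlo estimate of the win rate: play n independent games
    from freshly shuffled decks and return the fraction won."""
    wins = 0
    for _ in range(n):
        # 52 cards identified by rank only; suits are irrelevant to this game.
        deck = [rank for rank in range(13) for _ in range(4)]
        random.shuffle(deck)
        wins += simulate_game(deck)  # simulate_game returns True on a win
    return wins / n
```

Separating the layers this way also makes testing easier: you can pass in a trivial stand-in game to check the bookkeeping before wiring up the real rules.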

Am I Redundant?

That’s the million-dollar question, isn’t it? Should I be worried that an AI solved this problem faster and better than I did? Honestly, it’s a mixed bag.

Yes, ChatGPT was impressive. It interpreted the rules, wrote functional code, and corrected mistakes—all in under an hour. But it didn’t do it alone. I had to define the problem, debug its work, and guide it towards the solution. AI lacks intuition. It can crunch numbers, but it doesn’t know if the result makes sense. My childhood memories of endlessly looping games? That’s a level of insight ChatGPT just can’t replicate.

To test its limits, I gave ChatGPT problems from Project Euler (https://projecteuler.net/), a site where coders solve maths challenges with code. The first challenge asks you to find the sum of all multiples of 3 or 5 below 1,000, and the problems quickly escalate from there. I have been solving problems on the site for over 15 years, since I first learnt how to code, and am proud to be in the top 1% of all problem solvers.

But enough about me; how would ChatGPT do? For simple tasks, such as the multiples problem above or finding the first term of the Fibonacci sequence with 1,000 digits, ChatGPT breezed through. But tougher problems that required deeper mathematical reasoning? Not so much. I could’ve guided it to the right answers, but the process wouldn’t have been quick or seamless.
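For reference, the two easy problems just mentioned (Project Euler problems 1 and 25, respectively) each fall to a few lines of Python:

```python
# Problem 1: sum of all multiples of 3 or 5 below 1,000.
total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
print(total)  # 233168

# Problem 25: index of the first Fibonacci number with 1,000 digits.
a, b, i = 1, 1, 2  # a = F(1), b = F(2)
while len(str(b)) < 1000:
    a, b, i = b, a + b, i + 1
print(i)  # 4782
```

Python's arbitrary-precision integers do the heavy lifting in the second one; no special big-number library is needed.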

So, What Now?

As an actuary, I’m not panicking. My job involves more than just crunching numbers; it’s about interpreting data and applying experience-based judgment. For example, I have a rough idea of a healthy 65-year-old’s life expectancy or the monthly cost of life insurance for a 30-year-old. ChatGPT might handle the data, but it needs someone to tell it what the data means and how to use it.

That said, ChatGPT has changed the game. It’s a powerful tool that can help me write better code, faster. While it’s not replacing me anytime soon, it’s definitely forcing me to up my game. Like all tools in the actuarial arsenal, it needs to be used in the right way to maximise its potential.

And nothing will be able to take away from the joy of coding and working out your own solutions.

Conclusion

ChatGPT isn’t ready to take over the world—or our jobs—just yet. But it’s a glimpse of what’s coming. As with every technological leap—from computers to self-checkout machines—we have two choices: adapt or be left behind.

Me? I choose to adapt.

And in case you’re wondering: yes, ChatGPT did help re-word this article and came up with the headline. At the end of the day, I’m an actuary, not a journalist!

Daniel Lewis

Mathematics PhD student at University of Arizona

1 month ago

Nice article, Marc! Have you tried GitHub Copilot, or the AI-suggestions (Gemini) generated when using Google Colab? I find the latter an enormous timesaver, used appropriately.

Adam Joe Parker

ReFounder, OceanSaver - helping 1,000,000 people to clean plastic-free and Ocean-friendly

1 month ago

Super interesting


More articles by Marc Wiseman

  • Machine Learning using FIFA 2019

  • Predicting the Indian Premier League
