What Does VAR Teach Us About AI?


Earlier this month (Nov ’23), Newcastle defeated Arsenal 1-0 in a spirited Premier League (football/soccer) match. The lone goal came with some controversy: the Video Assistant Referee (VAR) ruled on three key incidents leading up to the goal, which is rare; goals are not normally that contested. As a (biased) Arsenal fan watching the game live, I believed VAR would overturn the on-the-field decision, and it did not. Something seemed wrong with the process. Look, I, like many, jumped to some conclusions, but I have been waiting for the detailed behind-the-scenes view of the real-time analysis before commenting. That day has come.

NB: You don't need to like or know anything about the sport to read this article.

Thoughts on VAR

The league revealed their analysis here. The analysis, to me, confirmed what I suspected watching it live, and what I still worry about today for VAR and elsewhere: I believe there’s a mismatch between decision making and accountability.

Before I get into it, I appreciate the transparency from PGMOL, the group that officiates Premier League matches. They knew this was controversial and released it anyway; they’re not trying to hide anything. And I must give the Video Referees credit; they did an efficient, thorough job with the tools at their disposal.

In their analysis, for each of the three incidents they said that, given the camera angles they had, they could not convincingly overturn the on-the-field ruling, so the on-the-field ruling stands. (Let’s ignore that they needed some additional angles…) This makes sense; the intent of VAR was never to replace the on-the-field referees, but rather to be “a support tool for officials” (via the FIFA website). The head referee said the analysis followed the proper process, and I agree it did. But what they didn’t discuss is the process itself: is the process right? I argue no.

The process and orchestration between on-the-field and video officials is where the conflict arises, in my opinion. If officials stop play for, say, an offside ruling that turns out to be incorrect, there’s no chance to recover. Instead, the league wants the action to continue to its possible conclusion, then claw back the action if there was a foul that nullifies things. And while I appreciate this “let the flow of the game go on” approach, this is where the problem exists: there’s a lack of accountability. On-field officials are deferring to VAR; VAR is deferring to on-field officials. It doesn’t work.

The management structure and oversight seem to have drifted from the original intent here. The international rules say, “the referee must always make a decision.” VAR was meant to support the on-field referee on a “clear and obvious error,” not to substitute its judgment for the referee’s. And sure, while that’s the letter of the law, by allowing play to go on, with an implicit deference to the technology to get it right, it makes sense that this rule is being broken in practice. It feels like referees aren’t taking a strong position to call what they see on the field because they’ve been told to let it play out; the technology has become a crutch of sorts.

I think this is "The VAR Trap" at work. Over time, technology becomes a crutch; we start to rely on it more, even though we know implicitly that the technology was never designed to be in charge. For football/soccer, we need to get back to the original intent and use of VAR. Right or wrong, incentivize the referees to make calls on the field. Get back to the roots.

What does this have to do with Artificial Intelligence (AI)?

We’ve been using AI for years to help us find signal through the noise, to find cases where things seem out of bounds. In security specifically, we use AI to find fraudulent credit card transactions, to find network behavior that seems abnormal, et cetera. And that’s awesome; it allows us to focus on value-added activity, such as confirming and handling erroneous conditions rather than just searching for them.
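To make the "find signal through the noise" idea concrete, here is a minimal, illustrative sketch (my own, not any specific vendor's method) of flagging abnormal transaction amounts with a robust, median-based outlier score. Note that it only surfaces candidates; a person still confirms them:

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of transaction amounts that deviate sharply
    from typical spend, using a median-based robust z-score.
    The model surfaces candidates for an analyst to confirm;
    it does not decide fraud on its own."""
    med = statistics.median(amounts)
    # Median absolute deviation (MAD): robust to the outliers themselves
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread to measure against
    # 0.6745 scales MAD to be comparable to a standard deviation
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# A run of everyday purchases with one outsized charge:
print(flag_anomalies([10, 12, 11, 9, 10, 11, 500]))  # → [6]
```

The median-based score is used here (rather than mean/standard deviation) because a single large fraud amount would otherwise inflate the baseline it is measured against.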

My fear is that as we expand AI capabilities, AI too could become a crutch, especially in security: well, the system says it’s not fraud, so it must not be fraud. Security is a game of cat and mouse. As we learn emerging fraudulent Tactics, Techniques, and Procedures (TTPs), we enhance our detections to catch those new TTPs, and then the fraudsters adjust to overcome the new defenses. It would be very easy for us to fall into “The VAR Trap,” deferring to the technology rather than taking accountability ourselves. And we need to remember: we must not and cannot do that.
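One way to avoid the trap in practice is to design the pipeline so the model never clears or blocks anything on its own authority. As an illustrative sketch (the thresholds and queue names below are hypothetical, not from any real product), every consequential score routes to a workflow where a person owns the final call:

```python
def route_alert(fraud_score, block_threshold=0.9, review_threshold=0.5):
    """Route a model's fraud score so a human makes every
    consequential decision. The model narrows the field;
    it never auto-clears or auto-blocks a transaction."""
    if fraud_score >= block_threshold:
        return "hold_for_analyst"   # high risk: analyst confirms before action
    if fraud_score >= review_threshold:
        return "queue_for_review"   # ambiguous: human judgment required
    return "log_and_monitor"        # low risk: still recorded and auditable

print(route_alert(0.95))  # → hold_for_analyst
print(route_alert(0.70))  # → queue_for_review
print(route_alert(0.10))  # → log_and_monitor
```

Even the "low risk" path is logged rather than discarded, so that when the fraudsters adjust their TTPs, the misses are still there to learn from.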

Look, I get it. It feels like we’re all being asked to do more with less support today, and AI advances are coming at a perfect time to help us focus and align our resources efficiently. It’d be easy to just trust the technology. But we have to remember that technology is there to aid us; it's not (normally) designed to lead. AI is just a tool; it’s not 100% accurate. We need to remember that in the decision-making process.

We've learned these management lessons time and again, from the military to large corporations: lines of accountability must be clear. In the case of AI, the ultimate accountability must reside with the user, not the system. And the user must be empowered to make those decisions, right or wrong. Mistakes happen, and we learn from them; that's how we gain experience. Relying on technology too much will reduce those experiential gains, and we must resist that.

Simon Reiniche

Cybersecurity Executive, Advisor, Investor

1y

Great article! As an Arsenal fan myself, I’m still bitter about this loss…


Tech has always been a crutch, and that's ok. Others of my age will remember being taught in school math that "you won't always have access to a calculator," which I'm typing as I glance at my iPhone. As technology evolves, how much we rely on those crutches does as well. It's veering into hard-core philosophy territory to debate what degree is right and what is too much (hint: if it involves an Elon Musk monkey-murdering brain chip, it's probably too much. Beyond that, we can debate.) This is why good AI rules specify what military theory refers to as "man-in-the-loop." AI is a fabulous analytic, automation, and assistive tool to aid in decision making, but a critical decision ultimately must rest with the human. In your VAR analogy, this would mean that the decision whether or not something looks dodgy enough to send to automated analysis in the first place could be given over to a sufficiently intelligent robot without problem, but the final call, and thus accountability, must rest with the person, as you correctly point out. What "critical" means is also up for debate; the EU AI Act has good guidance on that. But anyone who even implies that an automated intelligence, no matter how smart or complex, should bear that final accountability is an idiot.

