Cyber Brief for CFOs: January 2024

Each month, the team at Eftsure monitors the headlines for the latest accounts payable (AP) and security news. We'll bring you all the essential stories in our cyber brief so your team can stay secure.

NAB issues warning about AI voice scams

NAB warns of a rising threat from AI voice impersonation scams in Australia, where a mere three-second audio clip taken from social media or voicemail can be used to mimic a loved one's voice.

This fraud technique is expected to play a significant role in the 70% of Australian scams that rely on impersonation. Scammers create urgency to prompt quick action, such as a distressed phone call from a supposed family member requesting money. Fraudsters can also gain remote access to computers via phone calls or web chats, stealing large sums within minutes. Other prevalent scams include term deposit frauds that mimic banks and promise attractive returns, as well as phishing attacks that use malicious QR codes.

Explore voice impersonation scams and other AI-enabled fraud tactics – as well as how to protect yourself – in our 2024 Cybersecurity Guide for CFOs.

Cybersecurity tops list of CEOs’ AI fears

A recent global survey from PwC reveals that 64% of CEOs see cybersecurity as their biggest concern about generative artificial intelligence (AI). The survey, conducted between October and November 2023, encompassed nearly 5,000 CEOs worldwide.

Escalating cyberattacks have likely played a role in this anxiety. By 2025, damages from such attacks are forecast to reach a staggering $10.5 trillion annually, a 300% increase since 2015, according to a McKinsey report. The surge in generative AI risk also coincides with the rapid deployment of new AI products by many firms.

The PwC survey also found that CEOs expect generative AI to heighten misinformation risks within their own organisations.

Deepfake-related crime is here, but prosecution faces challenges

Illustrating that the post-trust era of AI is already here, the Commonwealth Director of Public Prosecutions (CDPP) has been handling evidentiary briefs involving the use of deepfakes. But prosecuting deepfake-related crimes is likely to present challenges.

Deepfakes – that is, AI-manipulated images or ‘synthetic media’ – have become a major concern and can be used for nefarious purposes such as fraud or misinformation. The CDPP's main issue lies in the limited legal avenues under the federal criminal code for deepfake cases, which carry a maximum sentence of only three years' imprisonment. Although harsher penalties exist for offences involving private sexual material, proving such cases with deepfakes can be difficult. Additionally, the CDPP struggles with resource constraints and the lengthy processes required to access and analyse data on suspects’ devices, potentially hindering criminal prosecutions.

VIC courts hack threatens delays and sensitive leaks

Victoria's court system experienced a major cyber attack in December, the first of its kind in Australia's justice system. The incident aligns with a global trend in which official systems are increasingly targeted for sensitive information. The attack, suspected to be a Russian ransomware hack, compromised video and audio recordings from Victoria's Supreme Court (including the Court of Appeal), as well as the County, Magistrates' and Coroners' courts, and potentially one Children's Court hearing. This pattern mirrors similar breaches in countries such as the US, Brazil and Chile, and even at the International Criminal Court in The Hague.

Court Services Victoria (CSV) confirmed the breach was limited to audiovisual records, with no access to employee or financial data. However, with AI-powered attacks like voice scams on the rise, the risk of malicious use is high. Organisations will need to think twice about the audiovisual records they store, and finance leaders will need to consider how such recordings could be used in scams.

St Vincent’s hack shows elusiveness of compromised credentials

Last month, hackers infiltrated St Vincent’s Health Australia using sophisticated methods that involved compromised accounts not found on the dark web – suggesting a targeted attack by advanced criminals.

While the compromised accounts used in the breach have been identified, they remain absent from dark web marketplaces, making the attackers more elusive and harder to track. The attack on St Vincent’s mirrors methods used against private health insurer Medibank and other Australian organisations.

It also highlights the complexity and stealth of cybercrime networks, which stay under the radar by selling or reusing stolen login credentials. This approach allows hackers to maintain access to infected systems and avoid detection, posing significant challenges for security professionals and increasing the risk of cybercrime and fraud.

Hamish Armstrong, Group Financial Controller at Sport New Zealand:

Interesting insights around the increase of deepfake cyber fraud. What steps are organisations putting in place to allow authentication of communications between senior staff or loved ones that may actually be impersonation scams?
