Confronting the Elephants: Unresolved Issues in Australia's Privacy Framework
James Patto
Your friendly neighbourhood Australian {Privacy & Data | Cyber | AI} legal professional | LinkedIn Top Voice | Speaker | Thought Leader
The elephants at Melbourne Zoo may be moving—but in Australian privacy law, some metaphorical elephants remain firmly in place.
If navigating privacy laws in Australia feels like sailing through murky waters, that’s because we are. Recent legislative changes have made ripples, but they haven’t materially shifted compliance obligations or provided the clarity businesses need.
The Privacy and Other Legislation Amendment Act 2024 (POLA Act) was introduced not to burden regulated entities with significant new compliance requirements, but to strengthen enforcement through the OAIC’s new powers and introduce a statutory privacy tort (among other things). Ironically, these reforms should still be a wake-up call for businesses, as they set the stage for increased regulatory scrutiny and a shift toward a more litigious privacy landscape – it’s certainly not a ‘do nothing’ situation.
But beyond the POLA Act, key issues remain unresolved, leaving businesses, regulators, and consumers in a state of uncertainty. It’s time to address the elephants in the room.
Elephant #1: The Uncertain Future of Privacy Reform
For years, we’ve been waiting for a clear roadmap for privacy reform. The government has signalled its commitment, but concrete action remains elusive.
Labor’s Privacy Act Review outlined significant changes, yet many of its recommendations were only agreed to “in principle”, with further consultation required—kicking the can down the road on the major reforms that would require a privacy uplift across the economy. Even low-hanging fruit—such as good data hygiene and governance—was left untouched in the POLA Act. Now, businesses are left without a clear timeline for the next phase of reforms.
Adding to the uncertainty, the next federal election is due before June 2025, and the future of privacy law could shift dramatically depending on the outcome. The Liberal Party was highly critical of the POLA Act, arguing that additional “red tape” would place an unnecessary burden on Australian businesses—especially during a cost-of-living crisis. If the Liberals return to government, their historical resistance to increased regulation could result in further delays—or even a complete reshaping of the reform agenda.
With no bipartisan clarity on what’s next, businesses are stuck in limbo—unsure whether to invest in compliance upgrades now or hold off until the political landscape settles.
What’s needed is a clear and detailed reform timeline from the government, so businesses can plan proactively. At the same time, the Coalition should provide transparency on which reforms they support—allowing organisations to prepare for what’s ahead, rather than being caught off guard by shifting political priorities.
Elephant #2: A Stretched Regulator
The Office of the Australian Information Commissioner (OAIC) plays a crucial role in enforcing privacy law, but it remains an under-resourced regulator facing an ever-expanding remit. While its enforcement powers have been strengthened under the POLA Act, its funding and capacity have not kept pace with the growing complexity of privacy risks and regulatory demands.
The Privacy Act Review has proposed significant reforms, including expanding the Privacy Act to small businesses, introducing new privacy rights for individuals, and tightening obligations on data handling. Each of these reforms, if implemented, will add significant regulatory responsibilities to the OAIC’s workload. Meanwhile, Australia has experienced a sharp rise in high-profile data breaches, requiring more enforcement activity, increased guidance, and deeper engagement with industry.
Despite these mounting pressures, the OAIC has received only modest increases in funding, often allocated in short-term budget cycles rather than providing a sustainable long-term resource base. The challenge is that privacy risks don’t operate in budget cycles—they continue to evolve, and without consistent investment, the OAIC risks being stretched too thin to deliver proactive enforcement. This could leave Australian businesses without clear guidance and consumers without meaningful protection.
A key concern is that privacy enforcement remains reactive rather than proactive. The OAIC does not have the resources to conduct large-scale investigations into systemic privacy violations unless a major breach occurs. This creates a risk-based enforcement gap, where organisations may prioritise compliance only after a crisis, rather than embedding privacy governance before issues arise. The OAIC has made strategic choices to focus on high-impact cases, but without more resources, this approach limits the number of investigations and the ability to set regulatory precedents.
To address these challenges, the OAIC has outlined a strategic approach in its 2024–25 Corporate Plan. This includes prioritising regulatory activities that address high-risk matters with the greatest potential for harm, promoting compliance through guidance and support, and leveraging technology to enhance efficiency. The OAIC also emphasises the importance of collaboration with other regulatory bodies and stakeholders to maximise its impact.
The OAIC has increasingly relied on collaboration with international data protection regulators, particularly in investigations involving large global technology companies. Working with regulators such as the UK Information Commissioner’s Office (ICO) and other European data protection regulators (among others) has allowed the OAIC to benefit from better-resourced agencies conducting in-depth investigations. This global cooperation helps streamline enforcement and align regulatory responses, but it is not without its challenges. Relying on international partners means investigations may not fully reflect Australian-specific privacy concerns, and enforcement actions are often driven by priorities set overseas rather than locally.
There are solutions available. First, stable and sustained funding would allow the OAIC to expand its enforcement capabilities beyond a purely reactive approach. Second, investing in regulatory technology—such as automated privacy audits and AI-driven compliance monitoring—could improve efficiency and oversight. Finally, the government should explore closer collaboration between the OAIC and other Australian regulators, such as the ACCC, ASIC, and eSafety Commissioner, to ensure a whole-of-economy approach to digital regulation. This should also encompass a clear and defined strategy for international data protection regulator engagement and collaboration.
And that’s not even counting AI-related privacy risks, which are an elephant of their own, addressed below.
Elephant #3: The Over-Collection and Retention Epidemic
In Australia, handing over personal information has become an expectation—whether you’re booking an appointment, making a purchase, or simply grabbing a coffee. Yet, many organisations still operate under a “just in case” mindset, collecting vast amounts of personal data without a clear purpose. Outdated record-keeping laws, fragmented compliance obligations, and uncertainty about deletion requirements reinforce this habit, creating massive data stores with no structured governance.
This approach exposes organisations to unnecessary risk. Recent cyber incidents have demonstrated how excessive data retention turns companies into prime targets for cybercriminals. Once breached, large, poorly governed datasets amplify the fallout—fuelling identity theft, fraud, regulatory scrutiny, and reputational damage. Despite these risks, many Australian businesses continue accumulating personal data with no clear retention or deletion strategy—or with a strategy on paper that has never been properly implemented and operationalised.
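To make "operationalising a retention strategy" concrete, here is a minimal sketch of an automated retention sweep. Everything in it is hypothetical—the record schema, the purposes, and the retention periods are illustrative assumptions, not legal advice; actual periods must come from applicable record-keeping laws and legal counsel.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record schema -- real systems will differ.
@dataclass
class CustomerRecord:
    record_id: str
    collected_on: date
    purpose: str  # why the data was collected

# Illustrative retention periods per collection purpose, in days.
# These numbers are assumptions for the sketch, not legal guidance.
RETENTION_DAYS = {
    "billing": 7 * 365,    # e.g. a tax record-keeping obligation
    "marketing": 2 * 365,  # e.g. an internal policy choice
}

def overdue_for_deletion(records, today):
    """Return records held past the retention period for their purpose."""
    overdue = []
    for r in records:
        limit = RETENTION_DAYS.get(r.purpose)
        if limit is not None and (today - r.collected_on) > timedelta(days=limit):
            overdue.append(r)
    return overdue
```

Even a simple sweep like this, run on a schedule and wired to a deletion workflow, turns a "just in case" data store into one governed by purpose and time limits.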
But data governance isn’t just about minimising risk—it’s about maximising value and resilience.
The Value of Strong Data Governance
A well-governed data environment doesn’t just protect an organisation—it enables it to unlock the full value of its data assets while ensuring compliance and operational efficiency.
Clear and structured data governance frameworks provide business-wide clarity on what data can and can’t be used. This eliminates internal friction between legal, IT, compliance, and business teams—allowing innovation and decision-making to proceed without regulatory uncertainty slowing things down.
For businesses that rely on data-driven insights, governance transforms data from a risk to a strategic advantage. Instead of navigating unclear policies and legal grey areas, teams can focus on leveraging clean, well-structured, and compliant data to drive insights, improve customer experiences, and create value.
Even with the best cybersecurity measures in place, data breaches are inevitable. When an incident occurs, organisations with strong data governance are in a far better position to respond quickly and effectively, reducing legal, regulatory, and reputational fallout.
A well-governed data environment enables:
✅ Faster legal and operational response – Knowing exactly what data has been compromised, where it was stored, and who it belongs to allows teams to act swiftly rather than scrambling to piece together incomplete records.
✅ Clear understanding of regulatory obligations – With structured data governance, businesses already know which privacy laws apply, which regulators need to be contacted, and what disclosure requirements must be met.
✅ Streamlined stakeholder management – Knowing which customers, suppliers, and third parties have been affected—and what contractual obligations exist—enables organisations to communicate transparently and fulfil legal obligations efficiently.
✅ Lower regulatory and reputational risk – Demonstrating strong governance and compliance during a breach investigation can reduce regulatory penalties and public backlash, showing that the organisation acted responsibly even in the face of an incident.
The Need for Urgent Reform
Australia has been promised an economy-wide review into data retention laws, but there’s still no transparency on its scope or timeline. Without clear guidance on what data must be kept, what can be deleted, and how long organisations should retain personal information, businesses are left to navigate conflicting obligations—often leading to excessive retention out of fear of non-compliance.
The longer this uncertainty persists, the more Australian businesses will continue accumulating unnecessary data, leaving them vulnerable to cyber risks, regulatory scrutiny, and operational inefficiencies.
The solution isn’t just waiting for law reform—businesses must take control now. Investing in robust data governance isn’t just a compliance exercise—it’s a way to future-proof operations, unlock efficiency, and respond decisively when challenges arise. By proactively managing data as a strategic asset rather than a liability, organisations can move from reactive risk management to proactive value creation—all while ensuring they’re ready for the inevitable challenges of the digital economy – and the complex laws that come with it.
Elephant #4: The AI Privacy Paradox
AI is transforming the privacy landscape at breakneck speed. From data scraping and facial recognition to automated decision-making and inferred data profiling, AI introduces new and complex privacy risks that Australia’s laws are ill-equipped to handle. The Privacy Act Review, while comprehensive, predates the AI boom, meaning the existing regulatory framework is struggling to keep pace with AI-driven data practices.
The risks are not theoretical—we’ve already seen controversial AI use cases expose uncertainty in Australia’s regulatory framework. Examples like Clearview AI’s large-scale data scraping, the use of de-identified patient data by I-MED, and the increasing reliance on AI-powered hiring and credit assessment tools have raised serious concerns about whether existing privacy protections are fit for purpose in an AI-driven world.
Bunnings: A case study in complexity
In a recent determination, the OAIC found that Bunnings Group Limited breached the Privacy Act 1988 (Cth) by collecting personal and sensitive information through facial recognition technology (FRT) without adequate consent or transparency. Bunnings implemented FRT across multiple stores to enhance security and protect staff and customers from theft and anti-social behaviour. The system captured facial images of individuals entering the stores, comparing them against a database of persons of interest. Non-matched images were deleted within milliseconds.
Despite the security intentions, the OAIC concluded that the use of FRT was disproportionately intrusive, collecting sensitive biometric information without individuals' knowledge or consent. The Commissioner emphasised that deploying such technology interfered with the privacy of all individuals entering the stores, not just high-risk individuals. Bunnings was ordered to cease the use of FRT in breach of the Privacy Act and to destroy all personal information collected through the system.
This case underscores the complexities at the intersection of AI innovation and privacy regulation. While AI technologies like FRT can offer significant benefits, their deployment must be carefully balanced against privacy considerations. The OAIC's decision highlights the necessity for organisations to assess the proportionality of such technologies, obtain valid consent, and be transparent with individuals about how their biometric information is collected and used.
Bunnings plans to appeal the OAIC ruling. The company contends that the technology was implemented solely to enhance the safety of team members and customers by deterring repeat offenders and preventing unlawful activity. Bunnings argues that the OAIC's decision does not adequately consider the context of increasing retail crime and the necessity of employing advanced security measures to protect both employees and customers. The company maintains that its use of facial recognition technology was a proportionate response to these challenges and that sufficient measures were in place to minimise privacy impacts, such as the immediate deletion of non-matching facial images.
Yet, despite these growing concerns, the regulatory response has been slow, fragmented, and under-resourced—leaving businesses to navigate the grey area between AI innovation and privacy compliance with limited guidance.
The OAIC’s Reluctance to Investigate Large AI Developers
A major issue is the OAIC’s enforcement capability when it comes to large, global AI developers. The resource imbalance between international AI companies—often backed by billions in funding—and the OAIC’s limited budget and staff means that taking on major AI privacy investigations is a daunting task for the regulator.
Despite having significant enforcement powers under the Privacy Act, the OAIC has shown reluctance to launch investigations into large AI companies, especially those headquartered offshore. The reality is that AI developers—whether it’s global tech giants or fast-growing AI startups—operate at a scale and speed that outstrips traditional regulatory processes. The sheer complexity of AI models, combined with cross-jurisdictional legal issues, makes it difficult for a single national regulator with finite resources to effectively hold AI companies accountable.
This has real consequences. Without proactive enforcement against AI-driven privacy violations, businesses are left in a regulatory grey zone, unsure whether certain AI applications comply with Australian privacy law. This uncertainty stifles responsible AI development, while bad actors may take advantage of weak enforcement to push the limits of data exploitation.
If the OAIC continues to be the default privacy regulator for AI-driven harms, it needs far greater resources—both financial and technical—to keep pace. But even with more funding, there’s a broader question: Should AI privacy regulation be left solely to the OAIC, or does Australia need a dedicated AI regulatory body?
At this stage, the OAIC is effectively navigating its resource constraints by collaborating with international regulators to oversee large AI developers. This strategic approach allows the OAIC to leverage global expertise and promote consistent data protection standards across borders. For instance, the OAIC's joint investigation with the UK's Information Commissioner's Office into Clearview AI exemplifies such cooperation. However, while this method maximises regulatory impact, it may lack a distinct Australian perspective, potentially overlooking local nuances in privacy concerns and cultural expectations.
The Challenge of AI-Specific Privacy Risks
Even if the OAIC were adequately resourced, AI-specific privacy risks don’t fit neatly into Australia’s existing legal framework. Unlike traditional data processing, AI relies heavily on scraped and inferred data, blurs the line between identified and de-identified information, and produces automated decisions that can be difficult to explain or contest.
While guidance from regulators like the OAIC can help, guidance alone isn’t enough. Australia needs regulatory certainty on AI and privacy—whether through explicit legal reforms, an appropriate mechanism for issuing binding rulings, or a dedicated AI regulatory framework that provides clear guardrails for AI developers and users.
Regulatory Divergence: The Global AI Landscape
Australia is falling behind global peers when it comes to AI regulation. The EU’s AI Act is setting a new global benchmark, introducing risk-based AI governance and strict requirements on high-risk AI applications. The US and UK, on the other hand, have opted for a lighter-touch, industry-led approach, prioritising AI innovation over strict legal controls.
If Australia fails to act decisively, we risk regulatory fragmentation—where Australian businesses are caught between complying with strict AI laws overseas while facing little regulatory certainty at home.
The next federal election will be critical in determining Australia’s AI regulatory approach. If a Liberal government takes a “pro-innovation” stance, AI regulation may lean towards self-regulation and industry codes—effectively shifting more responsibility onto the OAIC to enforce AI-related privacy risks with limited resources. This would further stretch the OAIC, making it even less likely that major privacy violations would be investigated.
Conversely, a Labor government might push for stronger AI regulation, though details remain vague. Without clear direction, businesses remain in limbo, unsure whether to prepare for stricter AI rules or assume a hands-off approach will continue.
What Needs to Happen?
AI’s rapid evolution demands more than piecemeal privacy law updates—Australia needs a cohesive AI regulatory strategy that balances innovation, privacy protection, and legal certainty.
To get there, we need:
✅ Stronger regulatory coordination – AI privacy risks shouldn’t fall solely on the OAIC. Australia needs cross-agency collaboration, potentially involving the ACCC, ASIC, and the eSafety Commissioner, to address AI risks from multiple angles.
✅ Clear, enforceable AI privacy rules – Privacy guidance is helpful, but legislation must clarify how AI-specific privacy risks are regulated, particularly around inferred data, de-identification, and automated decision-making.
✅ Investment in AI enforcement capacity – If the OAIC is expected to play a leading role in AI privacy regulation, it needs significant funding and expertise—including data scientists, AI engineers, and legal experts in AI governance.
✅ A national AI strategy that includes privacy protections – Australia’s AI policy should not just focus on innovation; it must also embed strong privacy and data governance principles from the outset.
The question is: will Australia proactively shape AI privacy laws, or will we wait for a crisis before taking action?
What Happens Next?
With 2025 fast approaching, Australia has an opportunity to bring much-needed clarity to privacy regulation. But for that to happen, the government, regulators, and businesses must work together—defining clear reform timelines, strengthening enforcement, and ensuring privacy protections evolve alongside technological change.
The question is: will we finally address these elephants, or will they continue to take up space in the room?
#PrivacyLaw #CyberSecurity #AI #DataProtection #AustralianLaw #Compliance #Regulation #DataGovernance #DigitalTrust #PrivacyReform #ArtificialIntelligence #Privacy