Our Managing Director, Murray Thornhill, discusses AI and the law today.
Australia currently takes a "layered approach" to regulating artificial intelligence, sometimes called a "Swiss cheese" approach, relying on existing laws, updated statements of policy and voluntary statements of principle. Over time, gaps in the law are identified. Because this is a rapidly changing space, legislating too prescriptively too early can curtail innovation and send the wrong market signals. Rather than creating new AI-specific laws, governments and policy makers are watching the space, and courts are applying existing legislation case by case. However, many advocate for a far more prescriptive, AI-specific legislative framework.
The government released 10 voluntary AI safety guardrails in August 2024, covering areas such as accountability, risk management and human oversight. While these guidelines are not legally binding, they are increasingly recognised as setting standards, and Australian courts are likely to recognise those standards as cases come before them.
The guardrails in many ways reflect existing legal principles in relation to duty of care, governance, privacy, safety, commercial conduct, employment law, competition and contract. They will continue to inform the development of the law in these areas. All organisations, and all levels of government, should be using the guardrails when developing policy and making decisions on AI use.
However, it also means that to some extent, as with most technological disruption, businesses are navigating largely uncharted territory, with clarity on many issues yet to come from the Courts.
Recent Australian court cases show these aren’t theoretical concerns. In January 2025, the case of Valu v Minister for Immigration saw a lawyer face professional sanctions after using ChatGPT to generate a legal brief that cited 16 completely fabricated cases. The court made clear that AI use must be “responsible and consistent with existing obligations.”
A similar decision in Murray v State of Victoria emphasises that lawyers have a fundamental duty to verify AI-generated content. Courts are treating AI output like work from a junior team member: it requires proper supervision and fact-checking before use.
These cases establish that ignorance isn’t a defence, and professional negligence claims are very real possibilities across all industries, not just law.
AI models are trained on vast amounts of copyrighted material, often without explicit permission. In Australia, copyright protection requires “intellectual effort” from a human author. Using AI-generated content might inadvertently infringe on others’ intellectual property rights, potentially exposing your business to legal action.
Many business agreements now include AI restrictions buried in fine print. Clauses like “no artificial intelligence use without express written authorisation” are emerging. The catch? You might already be breaching these through everyday tools like Microsoft Office autocomplete or Google Workspace features.
AI can and does produce convincing but entirely false information, what the tech industry calls "hallucinations." Your professional indemnity insurance may not cover errors stemming from AI use, leaving you exposed when client work contains AI-generated mistakes. Large language model platforms also display biases of all kinds, partly as a result of the information sourced and prioritised in training, and partly because of how users frame questions and use the tools. These issues emphasise the priority of human oversight, controls, policy, enforcement and governance, all of which are reflected in the 10 guardrails.
Using AI tools often means feeding sensitive information into systems that may store, process, or even train on your data. This can create Privacy Act breaches if you’re handling personal information without proper consent or safeguards.
AI pricing algorithms or recommendation systems could inadvertently create anti-competitive behaviour, particularly if multiple businesses use similar AI tools that lead to price coordination.
Document every AI tool your business uses, whether authorised or not, including hidden features in existing software. Survey your team about personal AI tool usage for work purposes. Create an inventory that includes obvious tools like ChatGPT and subtle features like email auto-suggestions. You can’t manage what you don’t know is already there.
Search existing agreements for terms related to “artificial intelligence,” “automated,” or “algorithmic” processes. Focus on your top five client and supplier contracts first. Negotiate realistic terms rather than accepting blanket AI bans that may be impossible to follow. Consider your own terms of service and whether AI obligations and risks should be addressed. In most industries there should be AI risk management in contracts, even if this is focused on the 10 guardrails and warranties around quality and human oversight.
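As a practical starting point for the contract-review step above, a short script can flag which agreements mention AI-related terms so they can be prioritised for legal review. This is a minimal sketch, not legal due diligence: the keyword list and the assumption that contracts are available as plain-text files are illustrative only.

```python
import re
from pathlib import Path

# Illustrative keyword list based on the terms suggested above;
# extend it to suit your own agreements.
AI_TERMS = re.compile(
    r"artificial intelligence|machine learning|automated|algorithmic|generative",
    re.IGNORECASE,
)

def flag_contracts(folder: str) -> dict[str, list[str]]:
    """Return, for each .txt file in `folder`, the sentences that
    mention an AI-related term; files with no matches are omitted."""
    hits: dict[str, list[str]] = {}
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        # Crude sentence split on end-of-sentence punctuation.
        matches = [
            s.strip()
            for s in re.split(r"(?<=[.!?])\s+", text)
            if AI_TERMS.search(s)
        ]
        if matches:
            hits[path.name] = matches
    return hits
```

A flagged sentence still needs a human reader: a match only tells you where to look, not what the clause means.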
Treat all AI output as draft work requiring human review before external use. Create clear workflows where qualified staff must verify facts, check references, and approve AI-assisted work before it reaches clients or the public.
Establish clear policies about what information can be processed through AI tools. Create separate workflows for handling personal information, confidential client data, and commercially sensitive material. Ensure your team understands the difference between internal productivity tools and external data processing.
Speak to your insurance broker to confirm whether your professional indemnity and directors' and officers' liability policies cover AI-related errors and AI use. Update client agreements to include clear terms about AI use and limitations. Document your AI governance processes to demonstrate reasonable professional standards.
Ensure staff understand AI limitations and your organisation's policies. Provide specific guidance about when AI use is appropriate and when it isn't. Create regular training updates as the technology and the legal landscape evolve.
Establish monthly reviews of AI use and compliance. Maintain records of approved AI applications and verification processes. Subscribe to legal updates about AI regulation—this area is changing rapidly.
Most AI tools process your input to improve their systems. This means client information, business plans, or personal data you enter may be stored indefinitely or used for training purposes. Always check the terms of service before using any AI tool with sensitive information.
Establish clear categories of information that can and cannot be processed through AI tools. Implement technical safeguards where possible, such as using on-premises AI solutions for sensitive work or anonymising data before AI processing.
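One technical safeguard mentioned above, anonymising data before it reaches an external AI tool, can be sketched as a simple redaction pass. The patterns and placeholder tokens below are illustrative assumptions only; genuine de-identification needs far broader coverage (names, addresses, dates of birth, account numbers) and should be reviewed against your Privacy Act obligations.

```python
import re

# Illustrative patterns only; a production redactor would need much
# broader coverage and testing against real documents.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    # Australian mobile numbers such as 0412 345 678 (rough pattern).
    (re.compile(r"\b(?:\+?61|0)4\d{2}[ -]?\d{3}[ -]?\d{3}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholder tokens
    before the text is sent to an external AI service."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text
```

Running the redactor locally, before any text leaves your systems, is the point: the external AI tool only ever sees the placeholder tokens.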
The Privacy Act requires reasonable steps to protect personal information. Using AI tools to process personal data without appropriate safeguards could constitute a breach of your privacy obligations, particularly if that information is then used for purposes beyond the original collection.
Australian businesses that establish proper AI governance now will have significant competitive advantages as the regulatory framework develops. The key is balancing innovation with responsibility—using AI to enhance your operations while meeting the professional standards courts are already expecting.
Remember that AI regulation is developing alongside the technology itself. Rather than waiting for perfect clarity, focus on implementing robust processes that demonstrate reasonable professional care. This approach protects your business while allowing you to harness AI’s benefits for growth and efficiency.
The businesses thriving in this new landscape are those treating AI governance as a strategic advantage with guardrails, rather than a compliance burden. They’re the ones that will shape how Australian AI regulation develops, rather than simply reacting to it.
This information serves as a general guide and does not constitute legal advice. It is based on our research and experience at the time of publication (29 August 2025). Please consult our knowledgeable legal team for any specific inquiries or advice relevant to your circumstances, as the content may not have been updated subsequently.
Contact us