
Friday, 10 April 2026
By Mark Armstrong and Heidi Bruce.
It is unavoidable: artificial intelligence, or AI, is everywhere at the moment, and it has infiltrated many aspects of the advertising industry. Yet despite its pervasiveness, there is a great deal of confusion and ambiguity when it comes to AI, and justifiably so. As legal advisors, we are seeing a significant increase in queries from clients seeking advice on what is and is not acceptable in the use of AI systems, and on the latest considerations for AI governance and contracts. The major concerns are around intellectual property, privacy and other significant legal risks, and who would be liable if a claim arose from the use of AI. The other key area of interest is what disclosures must be made when AI-generated material is used, and whether this changes depending on the media or the content. We look at these issues in more detail here.
Typically, the law is a few steps behind new and emerging technologies. AI-specific legislation and binding instruments are currently scarce in Australia, and AI use is still largely governed by existing laws that were in place well before AI. However, government and industry are taking steps to provide guidance to businesses to ensure AI is used appropriately and ethically. If you are feeling some apprehension about the legalities of AI and how it may affect your business, you are certainly not alone. Below, we break down where the advertising industry currently stands on AI from a legal perspective, how to ensure you are meeting your legal obligations, whether disclosure is necessary when using AI, and the steps you can take to insulate yourself from potential claims.
Laws, instruments and guidelines that may apply
Unlike other markets, including the European Union with its EU Artificial Intelligence Act and the USA with its various state-based AI-specific legislation, Australia currently has no single law that governs the use of AI systems. The Online Safety (Basic Online Safety Expectations) Determination 2022 (Cth) does contain requirements for AI system providers to take reasonable steps to proactively minimise the extent to which generative AI capabilities may be used to produce material, or facilitate activity, that is unlawful or harmful, so any agencies developing their own AI systems must have safeguards in place to prevent this. The Commercial Radio Code of Practice, enforced by the Australian Communications and Media Authority (ACMA), was also recently updated: as of July 2026, broadcasters of a regularly scheduled radio program or news program will be required to disclose if the program is hosted by a synthetic voice generated by AI. Beyond these, Australian legislation does not specifically prescribe how AI must be used. In October 2025, the Australian Government decided not to introduce an exemption under the Copyright Act that would have allowed big tech companies to freely use copyrighted material for scraping and data mining to train AI, leaving existing copyright protection intact for rights owners. The Government has flagged possible further reforms, such as a licensing regime, a small claims forum, and improved certainty over how copyright laws apply to AI-generated materials. In the meantime, this leaves scope for AI systems to be trained on the content of rights owners, with no specific regime in place to manage this.
Even with the current lack of AI-specific laws, AI use is by no means a lawless wasteland without real consequences. Considerable risks and legal issues remain under existing laws, of which advertisers must be aware, including the following:
- Intellectual property is protected under the Copyright Act 1968 (Cth) and the Trade Marks Act 1995 (Cth). Publicly accessible AI systems have been known to be trained on a very broad range of materials, including material containing protected intellectual property. As noted above, the rights of intellectual property owners remain intact under existing laws. Artists and other rights holders in creative works still have the right to protect against unauthorised use of their works, even if generated by AI. At the moment, using AI-generated material carries infringement risk, and the extent of that risk is difficult to assess. Many AI systems state that the user who generates the output owns the intellectual property in the AI-generated material, but then place responsibility on the user for any third party rights claims. This has the potential to create liability if the AI system copies protected material. For example, if an AI-generated ad features an unrelated brand’s logo, the advertiser could face legal action from that brand for infringement of a registered trade mark.
- The Australian Consumer Law (ACL), in Schedule 2 of the Competition and Consumer Act 2010 (Cth), prohibits misleading or deceptive conduct and false or misleading representations. Advertisers must ensure that all claims made in advertising materials are accurate, and if claims have been generated or summarised by an AI system, the level of scrutiny must be heightened further, as AI hallucinations could unintentionally introduce incorrect or unsubstantiated claims. Additionally, testimonials must be truthful. For example, if an advertisement features an AI-generated actor providing a testimonial about a product, and that testimonial is fabricated via AI-facilitated amalgamation, this would likely breach the ACL. False affiliation is also prohibited under the ACL and, similarly, ‘passing off’ is actionable under common law. Passing off occurs when a piece of intellectual property, or a valuable commercial name or reputation, is referenced to leverage a commercial benefit and that reference falsely implies a commercial connection or affiliation where none exists. For example, if an ad featured an AI-generated version of a celebrity but there was no connection between that celebrity and the advertiser, there is a high likelihood of a cease and desist letter from the celebrity’s legal representatives.
- Privacy obligations exist under the Privacy Act 1988 (Cth), including ensuring that personal information is protected, and this carries across to the use of AI systems. This can be problematic when an AI system is trained using personal information and that personal information is inadvertently disclosed via inclusion in advertising material broadcast to the general public. The Office of the Australian Information Commissioner (OAIC) has specifically recommended that organisations do not enter personal information (and especially sensitive information, which relates to more highly personal matters including a person’s racial origins, political beliefs, sexual orientation or health) into any publicly accessible AI systems, because of the associated high-level privacy risks. This OAIC guidance reflects best practice and should be followed by all AI users, not only from a privacy perspective but also due to the increasing number of contractual obligations prohibiting unauthorised disclosure of personal or confidential information. Notably, penalties under the Privacy Act are extremely high.
- Another consideration is confidential information. Much like AI systems that are trained using protected intellectual property, if confidential information is entered into an AI system, the consequences could be severe. If the AI system is trained on that confidential input, the information could potentially be made publicly available through an output generated by another user of the AI system. This could amount to a breach of confidentiality obligations in an agreement or a potential breach of legislation (e.g. a breach of privacy obligations), and the consequences could include damages or pecuniary penalties (i.e. fines). Accordingly, it is crucial that all staff are trained not to enter confidential information into AI systems, and that checks are in place to ensure any outputs are appropriate.
- Talent and performer rights also apply. The latest MEAA performer contract template (2024) includes a new AI provision that protects performers against the unauthorised use of their performances for AI-related uses or digital replicas.
In light of the above, it is crucial that all AI-generated materials are verified by a real human prior to publication, and that consideration is given to AI-related risks before production.
In addition to the above purely legal obligations, the National Artificial Intelligence Centre (NAIC), an Australian government body formed under the Department of Industry, Science and Resources, has developed various materials aimed at encouraging the safe and responsible use of AI. In September 2024, NAIC published the ‘Voluntary AI Standard’, which set out 10 AI ‘Guardrails’ – a set of practical guides relating to (among other things) accountability, risk management, human oversight, AI use disclosure and transparency. In October 2025, NAIC published the ‘Guidance for AI Adoption’, an updated and simplified guide that replaces the Guardrails. The Guidance for AI Adoption contains 6 essential practices that AI users should implement and has been released in two versions: ‘Foundations’, designed for those that have newly adopted AI or are only using it in low-risk ways; and ‘Implementation Practices’, which is more technical in nature and designed for those operating at a higher level of AI implementation, such as AI developers, governance professionals and technical AI experts. The 6 essential practices are identical in both versions, as set out below with further commentary from us:
- Decide who is accountable – This involves nominating key people, but also ensuring staff are adequately trained in the use of AI. This may include drafting an AI use policy;
- Understand impacts and plan accordingly – Businesses should assess who may be impacted by the use of AI and implement planning, monitoring and reporting systems so negative impacts can be identified and addressed;
- Measure and manage risks – This involves implementing AI-specific risk screening and management processes;
- Share essential information – This mainly relates to transparency and disclosure around when AI is used so end-users can understand when they are interacting with AI;
- Test and monitor – AI systems can be unpredictable so it is important to stress test them and continually monitor how they function; and
- Maintain human control – As above, ensuring humans can oversee the use of AI and override its function where necessary is crucial.
To confirm, these principles are voluntary and are not legally binding, but it would be prudent to adopt them where possible in AI governance processes and policies, as they provide meaningful guidance on best practice.
Do I need to disclose when I use AI?
The general answer under current Australian law is typically no: there is no specific rule or requirement to disclose whenever AI is used for public-facing materials, such as an ad campaign. However, there are some evolving considerations in this space. Where the use of AI could result in viewers being misled, disclosure would be prudent. Going back to the example above of using an AI actor to provide a testimonial in an ad, if the testimonial is legitimate, we would strongly recommend adding a prominent disclaimer to make it clear that while the testimonial is real, the person delivering it is AI-generated, in a similar fashion to ads that use paid actors.
We note that several prominent social media platforms, including Facebook, Instagram and TikTok, have introduced AI disclosure functions embedded in their platforms, championing a disclosure-forward disposition. For example, Meta, which operates Facebook and Instagram, has an ‘AI info’ label that can be added to posts either by creators themselves or automatically by Meta if its systems detect that a post is AI-generated or modified by an AI system. However, Meta’s systems are not perfect, and even posts that were not AI-generated or modified have had an AI label automatically added. Given the mistrust people may have of AI-generated materials, it may be that Meta prefers to over-label rather than risk AI-generated materials being interpreted as real. This is consistent with Meta’s policy mandating AI disclosure for any AI-generated or AI-altered advertisement relating to social issues, elections or politics. TikTok takes a similar approach and recommends that creators label content that is ‘either completely generated or significantly edited by AI’. As with Meta, TikTok creators can add a label reading ‘creator labeled as AI-generated’, but TikTok may also add an ‘AI-generated’ label if it identifies content as being fully or significantly AI-generated, including when a creator uses TikTok’s own AI effects. Both companies cite transparency for audiences as a key reason behind the push for AI labels. The concept is not entirely new to social platforms, with YouTube introducing mandatory AI disclosures in November 2023 for realistic content made with altered or synthetic media, including generative AI. In March 2024, a new tool was implemented in YouTube’s Creator Studio making it easier for creators to make such AI-related disclosures. Clearly, social media platforms are aware of the risks involved in AI-generated materials and are taking a proactive approach to ensure user trust is not eroded when viewing content that appears realistic but is not.
Brands and agencies using social media platforms to advertise or publish AI-generated materials are encouraged to adhere to each platform’s guidelines for the same reasons, but also to ensure that content is not removed for breach of a platform’s terms of use.
NAIC has also encouraged a transparency-focussed approach and has published guidance on ‘Being clear about AI-generated content’. This guidance takes the view that viewers should be informed when AI is used, especially if the generation of material was heavily reliant on AI or the potential impact is high – for example, where educational or health-related materials are being produced that humans would rely on to be accurate. NAIC holds that using clear transparency mechanisms, such as labelling, watermarking or metadata recording, builds trust with viewers and reduces regulatory and reputational risks. We have also seen PR backlash in the media where AI material is used and attracts criticism.
It may not be necessary to disclose if AI has been used in every instance, as this will depend on the context and associated risks of using the AI-generated material. However, businesses should be constantly assessing whether AI use disclosure would be prudent to make the material more trustworthy, or less prone to being challenged for being misleading. In any case, materials must be accurate.
Who is liable when things go wrong?
Ultimately, this question – like several other legal questions – can be answered with another question: ‘What does the contract state?’. There is much ambiguity, and there are numerous grey zones, regarding how AI operates, who owns materials and who is liable, but more certainty can be achieved via a tightly worded contract that specifically deals with the use of AI systems and AI-generated materials. Inherent risks are involved in the use of AI, and those creating materials through AI, such as advertising agencies (oftentimes at the request of their clients), would be wise to protect themselves with appropriate contractual safeguards and to have clear operational frameworks and policies in place on AI use.
A good contract would clearly set out the risks involved so clients can acknowledge them and both parties can comfortably proceed on a clearly defined basis. It would be reasonable for AI users to be held responsible for using AI systems in accordance with any relevant terms of use, and such users must be mindful not to knowingly provide inputs that may infringe applicable laws (such as the Privacy Act 1988 (Cth)) or a third party’s intellectual property rights. However, it would also be a reasonable position to exclude liability arising from risks or limitations that are inherent to the use of AI systems. Engaging a legal professional to ensure your contracts strike a fair balance of liability between all parties can help facilitate the provision of services and the harnessing of AI efficiencies, while minimising liability ambiguities.
| Mark Armstrong | Heidi Bruce |
| 02 8935 8809 | 02 8935 8806 |
| [email protected] | [email protected] |
*First published in the March 2026 Talentpay Compliance Review, which is distributed to targeted industry professionals and examines issues currently affecting the advertising, marketing and entertainment industries.