
By Matt Hansen and Emma Farncomb
With the aim of establishing greater consistency, safety and responsibility for organisations when using artificial intelligence (“AI”) technology, the Australian Government (with the assistance of the National AI Centre) has recently released two new documents:
- The Voluntary AI Safety Standard (“Voluntary Standard”); and
- The High-Risk AI Paper, a proposals paper for introducing mandatory guardrails for AI in high-risk settings (“Proposal”).
These publications follow the Government’s latest interim response to the Safe and Responsible AI in Australia discussion paper (issued January 2024), which highlighted the need for clearer guidance so that organisations can navigate the rapidly changing AI landscape and meet existing international ethical and responsible AI standards. Taking a human-centred approach to AI development and deployment, the Voluntary Standard provides a roadmap for organisations to harness the true value of AI technology while adopting it in a reliable, transparent and safe manner that upholds public expectations.
What is Australia’s Voluntary AI Safety Standard?
The Voluntary Standard consists of 10 guardrails that give Australian organisations practical guidance on how to safely and responsibly develop and deploy AI. It covers various aspects of AI governance, including testing, transparency, accountability, risk management, data governance and human oversight. The Voluntary Standard will operate as an interim measure, strengthening and complementing existing legal developments surrounding AI regulation while foreshadowing the proposed mandatory guardrails for high-risk settings, which, if approved, are anticipated to come into effect in 2025.
Although the Voluntary Standard does not impose any legal obligations, organisations are encouraged to adopt these AI governance and ethical practices because they cultivate trust and confidence amongst stakeholders. Doing so will also position organisations as leaders in the responsible and innovative use of AI and help them prepare, minimising the impact on their operations once the mandatory guardrails are implemented.
Does the Voluntary Standard apply to my business?
The Voluntary Standard applies to all organisations across the AI supply chain, more specifically:
a) AI developers (designers, developers or businesses that test and provide AI technology); and
b) AI deployers (organisations that supply or use AI systems to provide products or services).
While this first version of the Voluntary Standard focuses on organisations that deploy AI systems, the Government has indicated that the next version will expand on technical practices and guidance for developers of AI systems.
The 10 Guardrails:
- Accountability: Establish, implement and publish an accountability process including governance, internal capability and strategy for regulatory compliance. This accountability process will sit at the organisational level and include implementing policies for data and risk management, allocating clear roles and responsibilities for staff and detailing specific information about any training provided to staff who are key in overseeing AI operations.
- Risk Management: Establish and implement a risk management process to identify and mitigate risks. This will include implementing strategies specific to the AI system being used by the organisation and conducting regular assessments throughout the lifecycle of the AI system, considering both the technical risks and any societal impacts. Preventative processes that identify new risks and monitor the effectiveness of existing risk mitigation will be important for those using high-risk AI systems.
- Data Governance: Protect AI systems and implement data governance measures to manage data quality and provenance. This requires organisations to ensure all data used in operating an AI system is fit for purpose by implementing appropriate data governance, privacy and cyber security measures. Organisations must account for the unique characteristics of their AI system, ensure their data is free from bias and confirm that all data is legally obtained.
- Testing and Monitoring: Test AI models and systems to evaluate model performance, and monitor the system once deployed. Organisations will be required to test consistently for changes in performance metrics and for any unintended consequences or behavioural changes, such as accidental bias or copyright infringement. Although the tests will vary depending on the AI system, in all cases the metrics used must be capable of identifying foreseeable risks.
- Human Oversight: Enable human control or intervention in an AI system to achieve meaningful human oversight. Organisations must ensure real-time human involvement in the development and operation of an AI system throughout its lifecycle. Meaningful human oversight will mitigate and reduce unintended harms.
- Transparency: Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content. With the aim of cultivating trust, organisations will be required to communicate clearly when an AI system is used and how it affects individuals. Methods of disclosure may include content labelling and watermarking of AI-generated outputs.
- Contestability: Establish processes for people impacted by AI systems to challenge their use or outcomes. Organisations must maintain an internal complaint-handling process, together with adequate human oversight, so that any contested AI outcomes can be properly addressed.
- Supply Chain Transparency: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks. This guardrail not only requires developers to provide deployers with all the necessary information about the AI system but also requires deployers to give developers feedback on how the system is operating in practice. This supports ongoing monitoring, with the aim of reducing the opaque nature of AI systems.
- Record Keeping: Keep and maintain records to allow third parties to assess compliance with the guardrails. Organisations will be required to maintain an AI inventory and documentation, such as a general description of the AI system, its design specifications, a description of the data used and details of the system’s capabilities.
- Engage Stakeholders: Engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness. Deployers of AI in particular must engage with stakeholders so they can effectively identify potential harms and prevent unwanted bias.
Guardrails 1-9 above are likely to be carried over into the proposed mandatory guardrails set out in the High-Risk AI Paper. Under the Proposal, guardrail 10 instead requires businesses to undertake conformity assessments to demonstrate and certify compliance with the other nine guardrails, promoting internal business accountability and quality assurance.
Next Steps:
While we await the Government’s response to the submissions and feedback on the High-Risk AI Paper (submissions closed 4 October), organisations are encouraged to embrace these new standards and prepare for the upcoming changes. Aligning with these ethical and responsible AI practices will not only position businesses as leaders in the responsible and safe use of AI but will also support compliance with new privacy law obligations surrounding AI, consumer protections and corporate governance expectations. In an era where AI is constantly evolving, implementing effective mechanisms that mitigate the risks and harms of AI will be pivotal for business growth and operational consistency, which in turn builds trust with customers.
If you require further guidance on the Voluntary Standard, or require general assistance regarding the use of AI in your organisation, please contact one of our experts below:
Matt Hansen | +61 2 8935 8803 | [email protected]
Co-authored by
Emma Farncomb | +61 2 4331 0406 | [email protected]