By Matt Hansen and Emma Farncomb

With the aim of establishing greater consistency, safety and responsibility for organisations when using artificial intelligence (“AI”) technology, the Australian Government (with the assistance of the National AI Centre) has recently released two new documents:

  • The Voluntary AI Safety Standard (“Voluntary Standard”); and
  • The High-Risk AI Paper, a proposal for introducing mandatory guardrails for AI in high-risk settings (“Proposal”).

These publications follow the Government’s interim response, issued in January 2024, to the Safe and Responsible AI in Australia discussion paper, which highlighted the need for clearer guidance so that organisations can navigate the rapidly changing AI landscape and meet existing international standards for ethical and responsible AI. Taking a human-centred approach to AI development and deployment, the Voluntary Standard provides a roadmap for organisations to harness the true value of AI technology while adopting it in a reliable, transparent and safe manner that upholds public expectations.

What is Australia’s Voluntary AI Safety Standard?

The Voluntary Standard consists of 10 guardrails that provide practical guidance to Australian organisations on how to safely and responsibly develop and deploy AI. It covers core aspects of AI governance, including testing, transparency, accountability, risk management, data governance and human oversight. The Voluntary Standard will operate as an interim measure, strengthening and complementing existing legal developments in AI regulation while foreshadowing the proposed mandatory guardrails for high-risk settings, which, if approved, are anticipated to come into effect in 2025.

Although the Voluntary Standard does not impose any legal obligations, organisations are encouraged to adopt these AI governance and ethical practices because they cultivate trust and confidence amongst stakeholders. Adoption will also position organisations as leaders in the responsible and innovative use of AI and help them prepare for the mandatory guardrails, minimising the impact on operations once those guardrails are implemented.

Does the Voluntary Standard apply to my business?

The Voluntary Standard applies to all organisations across the AI supply chain, specifically:

  a) AI developers (designers, developers or businesses that test and provide AI technology); and

  b) AI deployers (suppliers and users of AI systems to provide products or services).

While the first version of the Voluntary Standard focuses on organisations that deploy AI systems, the Government has indicated that the next version will expand on technical practices and guidance for developers of AI systems.

The 10 Guardrails:

  1. Accountability: Establish, implement and publish an accountability process, including governance, internal capability and a strategy for regulatory compliance. This accountability process will sit at the organisational level and includes implementing policies for data and risk management, allocating clear roles and responsibilities to staff and detailing the training provided to staff with key roles in overseeing AI operations.
  2. Risk Management: Establish and implement a risk management process to identify and mitigate risks. This includes implementing strategies specific to the AI system being used and conducting regular assessments throughout the system’s lifecycle, considering both technical risks and societal impacts. Preventative processes that identify new risks and monitor the effectiveness of existing mitigations will be particularly important for organisations using high-risk AI systems.
  3. Data Governance: Protect AI systems and implement data governance measures to manage data quality and provenance. This requires organisations to ensure all data used in operating an AI system is fit for purpose by implementing appropriate data governance, privacy and cyber security measures. Organisations must account for the unique characteristics of their AI system, ensure the data is free from bias and confirm that all data has been legally obtained.
  4. Testing and Monitoring: Test AI models and systems to evaluate performance, and monitor the system once deployed. Organisations will need to test consistently for changes in performance metrics and for unintended consequences or behaviour changes, such as accidental bias or copyright infringement. Although the tests will vary depending on the AI system, in all cases the metrics used must be capable of identifying foreseeable risks.
  5. Human Oversight: Enable human control or intervention in an AI system to achieve meaningful human oversight. Throughout the lifecycle of the AI system, organisations must ensure real-time human involvement in its development and operation. Meaningful human oversight will mitigate and reduce unintended harms.
  6. Transparency: Inform end users about AI-enabled decisions, interactions with AI and AI-generated content. With the aim of cultivating trust, organisations will be required to communicate clearly when an AI system is used and how it affects individuals. Methods of disclosure may include content labelling and watermarking of AI-generated outputs.
  7. Contestability: Establish processes for people impacted by AI systems to challenge their use or outcomes. Organisations must maintain an internal complaint-handling process, supported by adequate human oversight, so that contested AI outcomes can be properly addressed.
  8. Supply Chain Transparency: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks. This guardrail requires developers to provide deployers with all the necessary information about the AI system, and requires deployers to give developers feedback on how the system is operating in practice. This ongoing exchange supports continuous monitoring and aims to reduce the opaque nature of AI systems.
  9. Record Keeping: Keep and maintain records to allow third parties to assess compliance with the guardrails. Organisations will be required to maintain an AI inventory and documentation, such as a general description of the AI system, design specifications, a description of the data used and details of the system’s capabilities.
  10. Stakeholder Engagement: Engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness. This will specifically require deployers of AI to engage with stakeholders so they can effectively identify harm and prevent unwanted bias.

Guardrails 1-9 above are expected to be carried over into the proposed mandatory guardrails set out in the High-Risk AI Paper. Under the Proposal, however, guardrail 10 instead requires businesses to undertake conformity assessments to demonstrate and certify compliance with the other nine guardrails, promoting internal accountability and quality assurance.

Next Steps:

While we await the Government’s response to the submissions and feedback on the High-Risk AI Paper (submissions closed 4 October), organisations are encouraged to embrace the new standards and prepare for the upcoming changes. Aligning with these ethical and responsible AI practices will not only position businesses as leaders in the responsible and safe use of AI but will also support compliance with new privacy law obligations surrounding AI, consumer protections and corporate governance expectations. In an era where AI is constantly evolving, implementing effective mechanisms to mitigate the risks and harms of AI will be pivotal for business growth and operational consistency, which in turn builds trust with customers.

If you require further guidance on the Voluntary Standard, or require general assistance regarding the use of AI in your organisation, please contact one of our experts below:

Matt Hansen
+61 2 8935 8803
 [email protected]

 

Co-authored by

Emma Farncomb
+61 2 4331 0406
[email protected]
