
Australian Government's Guidance for Safe & Responsible AI




Projections indicate that AI and automation could contribute an additional $170 billion to $600 billion annually to Australia's GDP by 2030. However, despite the potential economic benefits, there is a prevailing concern regarding public trust in the safe and responsible design, development, deployment, and utilisation of AI systems.


Recent surveys have shown that only one-third of Australians believe the nation has adequate safeguards to ensure the safe implementation of AI technologies. This has catalysed the government's commitment to addressing the issue through a range of measures.


Internationally, several jurisdictions have taken proactive steps in AI governance.


The United States has sought voluntary commitments from leading AI companies, while Singapore has introduced a standardised self-testing toolkit, known as 'AI Verify', to enable businesses to assess AI models against established principles. Similarly, Canada and the European Union are moving towards mandatory commitments for higher-risk AI systems via new legal frameworks.


In response to identified gaps in current legislation that may not sufficiently prevent harms from AI deployment in high-risk contexts, the Australian Government is contemplating mandatory safeguards for developers and deployers of AI systems in these settings. This includes critical infrastructure, medical devices, education, law enforcement, and biometric identification systems, among others.


The government aims to develop collaborative and transparent approaches to implementing safety guardrails, considering both amendments to existing laws and the possibility of a dedicated legislative framework. Additionally, the government plans to work with industry to establish a voluntary AI Safety Standard and explore options for labelling and watermarking AI-generated materials.


Further actions will focus on preventing harms through testing, transparency, and accountability; clarifying and strengthening laws to safeguard citizens; working internationally to support safe AI development and deployment; and maximising the benefits of AI through national capability development and adoption initiatives.


Overall, the Australian Government's interim response underscores its commitment to fostering a safe and responsible AI ecosystem, guided by principles of risk-based, collaborative, and transparent governance, with a strong emphasis on community well-being and international cooperation.


Want more key insights on the world of AI? 


Read last week’s blog on the AI lessons I learned in 2023 through discussing and working on the topic with clients.


