National AI Centre Gets Funding Boost: AI Safety Measures in Place to Mitigate Risk

The Australian government has allocated $39 million (US$26 million) to mitigate the risks that artificial intelligence poses to businesses and infrastructure as the technology becomes more prevalent in society and exposes governments to national security risks.

Under the Future Made in Australia plan, this funding aims to establish standards for responsible AI use and ensure that it is secure, fair, accessible, and non-discriminatory.

Over the past two years, the government’s focus has been on AI’s potential within the tech sector, with $102 million set aside in the 2023/24 budget for integrating quantum technology and adopting AI technologies.

A further $21 million has been allocated to transform and expand the National AI Centre, shifting it from CSIRO to the Department of Industry, Science, and Resources (DISR).

The National AI advisory group, consisting of 12 experts under the DISR portfolio, will continue its work over the next four years.

The group’s main task is to identify high-risk uses of AI and to recommend restrictions on them.

They are also developing new standards for watermarking, which involves embedding a unique signal into an AI model’s output to identify it as AI-generated.

This makes content provenance more transparent, so readers and creators can tell machine-generated material from human work.
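To make the idea concrete, here is a deliberately simple sketch of embedding an invisible signal in text output. It is not any standard mentioned in the article or used by the National AI Centre; production AI watermarks typically use statistical biases in token selection rather than hidden characters, and all names below are invented for illustration.

```python
# Illustrative sketch only: hide a short tag in text using zero-width
# Unicode characters, then read it back. Real AI watermarking schemes
# are statistical and far more robust than this toy example.

ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag's bits as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_watermark(text: str) -> str:
    """Collect any zero-width characters and decode them back to text."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8  # only whole bytes
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")

marked = embed_watermark("A short AI-generated paragraph.", "AI")
print(extract_watermark(marked))  # -> AI
```

The marked text looks identical to the original on screen, which is the point: the signal travels with the content without altering its visible form, the same property the proposed standards aim for at much greater robustness.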

Another $15 million has been earmarked for removing restrictions on AI use in areas such as healthcare, consumer law, and copyright.

In addition, $2.6 million has been allocated over three years to combat national security risks posed by AI.

AVEVA Pacific Vice-President Alexey Lebedev told AAP that the tech industry needs the rules set down by the National AI Centre to be straightforward for effective implementation.

He believes Australia’s budget can provide clear direction and certainty on AI while helping cement the industry locally, with its potential contribution to the Australian economy reaching $315 billion by 2028.

The Department of Industry, Science, and Resources emphasised in its published budget announcement that this funding commitment aims to combat AI’s potential to “cause harm, without appropriate regulation to ensure the use of AI is safe and responsible, secure, fair, accessible, and does not discriminate”.

Mirroring global trends, AI in Australia has been experiencing steady growth in key sectors such as health, which uses AI for improved patient care, diagnostics, and treatment planning.

In the financial sector, AI is applied for fraud detection, risk management, and customer service, with chatbots becoming more common in banking and insurance.

Agriculture also benefits from AI applications like precision farming, crop monitoring, and predictive analytics, which help farmers make data-driven decisions to improve crop yields and reduce waste.

In the defence sector, AI is used for threat detection, surveillance, and tactical decision-making.

As AI continues to gain prominence, developers must be transparent about their algorithms’ functioning and decision-making processes.

They should also ensure fairness, absence of bias, and non-discrimination in these algorithms.

Furthermore, data privacy and security are crucial aspects of safeguarding AI use, with the need to protect data from unauthorised access and use by implementing measures that comply with relevant laws and regulations.
