The world is witnessing a variety of strategies to ensure ethical and trustworthy AI, with the EU leading the way through a groundbreaking trio of laws that regulate AI and algorithms: the EU AI Act, the Digital Services Act, and the Digital Markets Act.
The US is also taking action at different levels of government, especially in HR tech and insurtech, while the UK is opting for a more flexible approach through white papers. China, meanwhile, has enacted several laws to control AI, with a particular focus on generative AI.
Brazil has proposed four laws, but they have yet to make it out of Congress. Australia, by contrast, has a less developed ecosystem, with two publications from the Australian Government – the AI Ethics Framework discussion paper and the AI Action Plan – guiding the regulatory landscape.
Australia’s Ethical AI Framework Discussion Paper
In 2019, the Australian Government’s Department of Industry, Innovation and Science published a discussion paper on Australia’s AI Ethics Framework, inviting public feedback on eight core principles for ethical and responsible AI. These are:
- Generates net benefits – AI systems should create more value than cost for society and the environment
- Do no harm – AI systems should not harm humans or other living beings and should minimise negative impacts as far as possible
- Regulatory and legal compliance – AI systems should respect and follow all applicable local, state, and federal laws, regulations, and duties
- Privacy protection – AI systems should safeguard personal data and prevent unauthorised access or leakage
- Fairness – AI systems should not cause unfair discrimination and should use unbiased data for training (a simple way to quantify this is sketched after this list)
- Transparency and explainability – Users should be aware of the presence and purpose of an algorithm and how it reaches its decisions
- Contestability – Users should have an effective way to question and dispute the use or outcome of an algorithm
- Accountability – Those who develop and deploy AI systems should be clearly identifiable and accountable for their impacts
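By way of illustration only – the discussion paper does not prescribe any particular metric – here is a minimal Python sketch of one common way to quantify the fairness principle: the demographic parity difference between two groups in a model’s predictions. The function name and example data are hypothetical.

```python
# Hypothetical sketch: demographic parity difference, one simple
# proxy for the "fairness" principle above. Assumes binary
# predictions (1 = positive outcome) and a binary protected attribute.

def demographic_parity_difference(predictions, protected):
    """Absolute difference in positive-outcome rates between the
    protected group (protected == 1) and the rest (protected == 0)."""
    group_a = [p for p, g in zip(predictions, protected) if g == 1]
    group_b = [p for p, g in zip(predictions, protected) if g == 0]
    rate_a = sum(group_a) / len(group_a)
    rate_b = sum(group_b) / len(group_b)
    return abs(rate_a - rate_b)

# Example: a hiring model that shortlists 3 of 4 applicants in one
# group but only 1 of 4 in the other shows a disparity of 0.5.
preds     = [1, 1, 1, 0, 1, 0, 0, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, protected))  # 0.5
```

A value near zero suggests both groups receive positive outcomes at similar rates; in practice, auditors would combine several such metrics rather than rely on one.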
The discussion paper asks eight questions about the principles, tools, and best practices needed for responsible AI adoption in Australia, and it acknowledges the efforts of other countries, as well as organisations such as Google and Microsoft, that have issued ethical guidelines on AI.
The discussion paper also emphasises the role of human oversight in ensuring accountability and reducing harm. It aims to prevent societal harm while fostering innovation so that society can reap the benefits of AI. It further highlights the need for a “society in the loop” approach, in which the end users of the technologies are properly involved in the design and development stages so that the frameworks are practical and effective in the real world.
The discussion paper cites several scandals and harms caused by unethical AI, such as Amazon’s abandoned recruitment tool and Northpointe’s COMPAS recidivism tool. To avoid such risks, it proposes a toolkit based on nine practices:
- Best practice guidelines to assist AI developers and users across industries
- Collaboration to encourage and reward partnerships between industry and academics to support the development of ethical AI by design and foster diversity in AI development
- Consultation with the public and specialists to ensure stakeholder views are considered
- Impact assessments to evaluate the potential impacts of AI, including negative impacts on individuals, communities, and groups, and to inform mitigation strategies
- Industry standards to facilitate the implementation of ethical AI, including educational guides, training programs, and possibly certification
- Internal or external reviews to check AI systems’ compliance with ethical principles and Australian policies and laws
- Mechanisms for monitoring and improvement of AI systems for accuracy, fairness, and sustainability
- Recourse mechanisms to enable appeal processes when an algorithm has a negative impact
- Risk assessments to categorise systems by the level of risk involved in their deployment or use
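To make the last of these practices concrete, the following is a minimal, hypothetical Python sketch of how a risk assessment might categorise systems by level of risk. The tiers, criteria, and names are illustrative assumptions only; the discussion paper does not prescribe a specific scheme.

```python
# Hypothetical sketch of the "risk assessment" practice above.
# The tiers and criteria below are illustrative assumptions; the
# discussion paper does not define a concrete categorisation scheme.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    affects_legal_rights: bool    # e.g. welfare, credit, hiring decisions
    fully_automated: bool         # no human review before outcomes apply
    processes_personal_data: bool

def risk_tier(system: AISystem) -> str:
    """Categorise a system into a coarse risk tier."""
    if system.affects_legal_rights and system.fully_automated:
        return "high"    # automated decisions with legal or financial effect
    if system.affects_legal_rights or system.processes_personal_data:
        return "medium"  # human-reviewed but still consequential
    return "low"

# A robodebt-style tool would land in the highest tier:
debt_tool = AISystem("debt recovery", affects_legal_rights=True,
                     fully_automated=True, processes_personal_data=True)
print(risk_tier(debt_tool))  # "high"
```

The tier would then drive which of the other practices apply, for instance mandatory impact assessments and external reviews for high-risk systems.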
The Australian Government has yet to turn these principles and tools into regulatory or legal obligations.
Australia’s AI Action Plan
In June 2021, the Australian Government published its AI Action Plan (since archived), which outlines its vision to make Australia a world leader in secure, trusted, and responsible AI. The plan proposes a mix of new and existing initiatives to accomplish this: direct AI measures, programs and incentives to boost technological development, and foundational policies to assist businesses, innovation, and the economy. It pursues this vision through four main focus areas:
- Creating an environment to attract AI talent to ensure businesses have the skills they need
- Developing and adopting AI to transform businesses in Australia by creating jobs and increasing productivity
- Making Australia a global leader in responsible and inclusive AI that aligns with Australian values
- Using cutting-edge AI to address national challenges and ensure that all Australians can benefit from AI

The plan explains how the three kinds of initiative – direct AI measures, programs and incentives, and foundational policies – support each of these focus areas.
Legal action against AI in Australia
The AI Action Plan is now archived, which means it is unlikely to result in any AI-specific regulation or legislation and will instead remain part of Australia’s Digital Economy Strategy. However, Australia remains committed to responsible AI, and existing laws still apply to AI systems.
In fact, the Australian Government itself has faced the consequences of its faulty automated debt recovery tool, robodebt. In September 2019, Gordon Legal, a law firm based in Melbourne, launched a class action on behalf of clients whose government payments were unfairly reduced or withheld because the tool falsely claimed they had underreported their income between July 2015 and November 2019.
The class action, brought against the Commonwealth, covered about 648,000 group members. It was settled in September 2022, when the Australian Government agreed to pay $112 million in compensation, including legal costs, to about 400,000 eligible individuals.
The government also refunded more than $751 million to citizens affected by debt collection triggered by the tool and agreed to cancel repayment demands for $744 million in invalid debts that had been partly repaid and $258 million in invalid debts that had not been repaid. In total, over $1.7 billion has been returned or waived for about 430,000 group members.
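Those remediation figures are consistent with the headline total: $751 million in refunds plus $744 million and $258 million in cancelled invalid debts comes to roughly $1.75 billion, in line with the reported “over $1.7 billion”.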
Conclusion
In conclusion, this article has explored the different approaches to responsible AI taken by various countries and organisations worldwide, focusing on Australia. It has discussed the AI Ethics Framework and the AI Action Plan published by the Australian Government, as well as the principles, tools, and best practices they propose for ethical and trustworthy AI development and adoption.
It has also highlighted the case of robodebt, a notorious example of unethical AI that resulted in a class action lawsuit and a massive payout by the Australian Government. Australia is clearly aware of the importance of responsible AI but has yet to implement any AI-specific regulation or legislation.
Existing laws can be applied to AI, but more is needed to address the complex and dynamic challenges AI poses. Australia should therefore continue to engage with the public and stakeholders, and learn from the experiences of other countries and organisations, to develop a robust and comprehensive framework for secure, trusted, and responsible AI.
Resources:
- Australia’s Artificial Intelligence Ethics Framework | Department of Industry, Science and Resources
- Australia’s Artificial Intelligence Action Plan | Department of Industry, Science and Resources
- ACOLA submission on the Data61 discussion paper on AI and Australia’s ethics framework | Australian Council of Learned Academies – https://acola.org/acola-submission-data61-discussion-paper-ai-australias-ethical-framework/