Anti-corruption (and mismanagement) in the Age of AI and Algocracy – ‘She won’t be right mate’, 8 February 2024

Guest editorial by Associate Professor Guzyal Hill, PhD (Law), Charles Darwin University

We are seeing the rapid emergence of what John Danaher has called an ‘Algocracy’ – the rise of algorithms and Artificial Intelligence (AI) in decision-making in both the public and private sectors. This ups the ante for enhancing the transparency and accountability that underpin contemporary democratic socio-economic and legal structures, at a time when those structures face unprecedented challenges.

The Robodebt Royal Commission has exposed the dire consequences of the misuse of technology, and the algorithm at its centre was not even AI. The Robodebt fiasco may pale in comparison with the scale and significance of the corruption and mismanagement that could follow the more widespread availability and application of AI.

Navigating adoption versus non-adoption, private versus public, and use versus misuse of AI

While at first the adoption or non-adoption of AI seems to be an individual’s decision, ultimately the decision is not ours to make. The major corporations investing heavily in AI – Apple, Microsoft, Alphabet (Google) – have products that are already embedded in our personal and professional lives as public servants. Additionally, the vast majority of computer products are developed by the private sector under a contract or tender.

Traditionally, under these tenders, the development of an algorithm is tied to requirements of transparency and accountability. AI, as we know, involves black-box processes that can be unexplainable even to their developers. The requirements of transparency and accountability can still be imposed, but they become difficult to achieve.

Danaher concludes that the ultimate threat is that humans could introduce and ‘defer to more and more algocratic systems, starting with ones that are relatively easy to follow, but which morph into systems that are far more complex and outside the upper limits of human reason’. As these systems proliferate, they will become more difficult to audit and regulate, limiting human comprehension and participation and leading to a situation where we are governed by ‘a set of decision-making procedures that are depleted of their legitimacy’.

AI is often positioned as a helpful tool for improving lives. But this tool cannot be likened to a calculator, in which every process is explainable. AI is better likened to social media: a multi-dimensional, multi-player, multi-level, complex adaptive system that is capable of being weaponised. Facebook was great for finding long-lost friends and sending convenient party invitations, until its data was used by Cambridge Analytica. Twitter (now X) was fantastic for sharing ideas in short posts, until a former employee started accepting bribes in exchange for users’ private messages.

Approach to AI regulation and governance

Some companies investing in AI argue against regulation, saying that regulation stifles innovation and that it is simply too early to regulate. Others call for urgent regulation to avoid harmful consequences, and some even propose a moratorium on further development of AI. Often these arguments are made by representatives of advocacy coalitions with strong financial or other interests.

In Australia, a valid argument has been made that the modern regulatory landscape is sufficient to encompass the regulation of AI through directors’ duties, the tort of negligence, standards and other existing laws. Yet few would say that Australia has achieved the right regulatory balance between cyber-security and privacy in the digital age.

On 17 January 2024, the government released its Interim Response to the 2023 “Safe and Responsible AI in Australia” consultation. The government will: 

  • introduce ‘further guardrails on legitimate but high-risk uses of AI, as well as unforeseen risks from powerful “frontier” models’.
  • seek to regulate AI, though it has not yet been decided whether this will be achieved through standalone legislation or amendments to existing laws.
  • take a risk-based, technology-neutral approach to regulating AI.

Corporations that develop and apply AI are also contending with the governance challenges it poses. Ultimately, the governance of AI relies on sound data and information management within a corporation, as well as rigorous governance processes. Very few organisations could vouch for the full robustness of either.

Approach of individual agencies and public servants – key takeaways and recommendations 

One way for government to balance decisions on adopting and using AI is to apply the Automated Decision-making – Better Practice Guide prepared by the Commonwealth Ombudsman.

  • The UK recommends an inquisitive but cautious approach to AI by public servants. The UK government warns against the use of personal and sensitive information and provides examples of safe and unsafe uses of generative AI. It is difficult to argue against this approach, as it is harder to regulate something you do not know how to use.
  • As highlighted by the Robodebt Royal Commission and the Human Rights Law Centre, effective and accessible whistleblower protections are key to strengthening public trust in both the public and private sectors.
  • Accepting that ‘it takes a village to regulate AI’ is key. AI transcends physical borders; it is impossible to close the border between SA and NT to stop the proliferation of AI. It is therefore important to consider national uniform legislation so that Australians are afforded equal protection irrespective of their geographical location in the federation.

AI transcends disciplinary and agency borders, and there is no single expert who knows it all. In the words of ASIC’s Chair, Joe Longo, there is a question we all have to ask: ‘is this enough?’