Australian Robodebt scandal shows the risk of rule by algorithm
“It was just really beyond my comprehension, how they could be asking me for that money,” said Nathan Kearney, 33, who received an automated call from Australia’s welfare agency in 2016 saying he owed it thousands of dollars. The call was just the beginning of a five-year ordeal.
What followed was a series of harassing phone calls from debt collectors. His tax rebate was withheld to recover A$2,000 (RM5,933) in arrears the government said he owed. A year later he was sent another debt notice, this time for about A$4,000 (RM11,867). He had to move in with his parents to cope. “It was the only way that I could imagine being able to cope financially,” said Kearney, whose debts spiralled even higher as he paid for therapy sessions to treat his depression.
Kearney is just one of some 400,000 welfare recipients in Australia who were accused of misreporting their earnings and consequently saddled with hefty debts. The errors stemmed from an automated debt recovery scheme, known as Robodebt, established by the former conservative coalition government.
The scheme’s algorithm was fundamentally flawed: it “wrongly calculated that the welfare recipients owed money and so issued a ream of debt notices”. Over its operation from July 2015 to November 2019, the scheme used algorithms to calculate alleged overpayments, “raising more than A$1.7bil (RM5.04bil), which the government was forced to repay or wipe when a court ruled the scheme unlawful in 2019”.
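The core miscalculation, as documented in later court proceedings and public reporting, was “income averaging”: annual income figures from tax records were smoothed evenly across the year’s 26 fortnights and compared against the fortnightly income a person had actually reported while on benefits, so anyone with irregular earnings looked like they had under-reported. The Python sketch below is a simplified illustration of that flaw, not the agency’s actual code; the function names and the 50% benefit reduction rate are invented for demonstration.

```python
# Hypothetical sketch of the "income averaging" flaw attributed to
# Robodebt -- not the agency's real code. Names and rates are invented.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_tax_income: float) -> float:
    """Smooth a whole year's taxed income evenly across 26 fortnights."""
    return annual_tax_income / FORTNIGHTS_PER_YEAR

def flawed_debt_estimate(annual_tax_income: float,
                         reported_fortnights: list[float],
                         benefit_reduction_rate: float = 0.5) -> float:
    """Raise a 'debt' whenever the averaged figure exceeds what the person
    reported in a fortnight, presuming benefits were overpaid in proportion
    to the apparent shortfall."""
    average = averaged_fortnightly_income(annual_tax_income)
    debt = 0.0
    for reported in reported_fortnights:
        shortfall = max(0.0, average - reported)
        debt += shortfall * benefit_reduction_rate
    return debt

# A casual worker who earned A$13,000 across half the year and honestly
# reported zero income while on benefits for the other half:
worked_half = [1000.0] * 13      # fortnights with genuine earnings
benefit_half = [0.0] * 13        # fortnights correctly reported as zero
print(flawed_debt_estimate(13_000, worked_half + benefit_half))  # 3250.0
```

Because averaging erases the lumpiness of casual work, the sketch manufactures a A$3,250 “debt” even though every fortnight was reported accurately, which is the pattern the scheme’s critics described.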
A Royal Commission, one of the most powerful forms of government inquiry in Australia, is now investigating the Robodebt scheme. Announcing the inquiry in August, the Labor government’s Government Services Minister Bill Shorten called Robodebt a “shameful chapter in the history of public administration” and a “massive failure of policy and law”.
The case shows that artificial intelligence (AI), which governments around the world increasingly use to process welfare claims, is far from error-proof. “Such schemes are inherently problematic, and often fail because they lack all humanity,” said Tapani Rinta-Kahila, a professor at the University of Queensland who studies the use of AI in the public sector. “No matter how good an algorithm you have, no matter how sophisticated it is, it still cannot address fundamental human issues as sensitive as welfare,” he said.