Did you know a recruiter is behind every online job application, even if that recruiter isn't human? A new form of predictive recruiting powered by artificial intelligence (AI) uses algorithms to analyze your resume and shortlist the most qualified candidates. Alongside the standard hiring process, an algorithm may now be judging whether you fit the job. Are AI-driven recruiters the future of hiring? Join us as we explore this emerging technology and how it could change our conception of who gets hired and why.
A box may be making decisions about you without your knowledge, and even if you could see what was inside, it would tell you little. These "black boxes" are ubiquitous in the choices being made about our digital and real-world lives.
Not all are harmless: some decide the news you read, who you date, and even how much credit you can get. Others are more innocuous, such as suggesting which film to watch next.
Determining whether someone gets a job, is suspected of shoplifting, or which route they should take to the shops are only some of the ways algorithms are used. They have even been deployed in higher-stakes settings, to forecast teenage pregnancies and to cut welfare benefits for people with disabilities.
All these boxes are full of algorithms, powerful machines increasingly used in matters that affect us all.
It is almost impossible to know exactly what these algorithms are doing, let alone keep them in check. Creative and effective methods exist for gaining insight into these opaque systems, but they can only be applied if corporations and governments allow it.
Centrelink's Robodebt algorithm was hidden away in a 'black box', out of public sight. There it carried out its duties, sending hundreds of thousands of inaccurate debt notices built on a flawed assumption.
Rather than assessing each of the year's 26 fortnightly reporting periods separately, the algorithm spread a welfare recipient's total annual income evenly across all of them. It is a simple enough mistake, but replicated in secret and at scale, its consequences were anything but mundane. One submission to a Senate inquiry described being "literally crushed" and in "shock."
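The averaging flaw is easy to see with a toy calculation. The sketch below uses entirely hypothetical figures and a simplified income-free threshold (the real Centrelink rules were more complex); it only illustrates how spreading annual income evenly across 26 fortnights can flag fortnights in which a person actually earned nothing.

```python
# Hypothetical sketch of the Robodebt averaging flaw. All figures and the
# threshold are invented for illustration; this is not the real formula.

FORTNIGHTS = 26
THRESHOLD = 500  # hypothetical income-free amount per fortnight, in dollars


def averaged_income(annual_income: float) -> list[float]:
    """Robodebt-style: spread annual income evenly over every fortnight."""
    return [annual_income / FORTNIGHTS] * FORTNIGHTS


def fortnights_flagged(incomes: list[float], threshold: float = THRESHOLD) -> int:
    """Count fortnights whose reported income exceeds the threshold."""
    return sum(1 for income in incomes if income > threshold)


# Someone who earned $2,080 per fortnight for 10 fortnights of work,
# and nothing during the 16 fortnights they relied on welfare support.
actual = [2080] * 10 + [0] * 16

# Assessed fortnight by fortnight, only the 10 working fortnights
# exceed the threshold; the 16 zero-income fortnights raise no debt.
print(fortnights_flagged(actual))                    # 10

# Averaged, the same $20,800 becomes $800 in *every* fortnight, so all
# 26 periods appear over the threshold, including those with no earnings.
print(fortnights_flagged(averaged_income(20800)))    # 26
```

The person's total income is identical in both cases; only the assumption that it was earned evenly manufactures the extra 16 "overpayment" fortnights.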
One of the roughly 433,000 Australians affected by the computer-generated debts described the experience:
“I walked around my house trying to deny the reality of what had happened … I was confused as to how I owed this amount of money. Within weeks, I began receiving calls, texts and letters from a debt-collection agency.”
After a successful legal challenge and widespread media coverage, the government established a royal commission to examine the scheme's failings. But while that algorithm has been exposed, others remain hidden, operating behind closed doors.
The Department of Home Affairs has used algorithms to assist visa processing for more than 20 years. And with demand for visas rising since Australia's borders re-opened, this is set to expand further.
A Home Affairs spokesperson acknowledged that the department is currently reviewing "a range of emerging technologies" as part of a "modernisation" strategy.
Even though the Robodebt crisis has taught us valuable lessons, the inner workings of Home Affairs' algorithmic systems remain shrouded in secrecy. The ABC requested information about the transparency, monitoring, testing, and avenues of redress for these systems, but received no response.
Weighing the risks and benefits of implementing AI in our lives is crucial if we are to keep control over technology rather than the other way around. What are your thoughts on this risk? Are you for or against AI in certain areas of life, such as health care? Share your opinion with us in the comments below. As always, thanks for reading!