From chatbots to self-driving vehicles, Artificial Intelligence (AI) technology has the potential to revolutionize how businesses, governments, and individuals interact with machines. Despite its potential for game-changing advances, AI raises important questions about employment discrimination.
Today, employers have unprecedented access to information about job applicants through channels like social media networks and algorithms that parse applications. Here we explore those questions, discussing the ethical ramifications of using AI during the recruitment process and outlining some working approaches for preventing discrimination.
AI may well be the successor to traditional hiring methods, yet it can carry all of the same prejudices of times past.
The US agency that enforces federal anti-discrimination laws is exploring critical questions raised by the adoption of artificial intelligence and automation by nearly all major employers in the nation: How can discrimination in recruitment be prevented when a machine is causing it? What safeguards could stop such a situation from arising?
At a Tuesday hearing titled “Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,” Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows noted that 83% of employers, including an impressive 99% of Fortune 500 companies, use automated tools as part of their hiring process.
The widespread use of such technology in recruitment and hiring is the focus of a strategic agency initiative, and Burrows urged everyone involved to voice their opinions on how it should be employed.
She continued:
“The stakes are simply too high to leave this topic just to the experts.”
Automating the Hiring Process: Resume Scanners, Chatbots, and Video Interviews
Last year, the EEOC issued guidance on cutting-edge hiring tools, outlining numerous drawbacks associated with their use in the workplace.
Virtual assistants or “chatbots” can screen résumés for keywords, scanners can be programmed to prioritize certain words, and software can evaluate candidates’ facial expressions and speech patterns in video interviews. The agency identified the potential for discrimination these algorithms create as an important issue.
In a video interview, an applicant’s speech patterns can determine their score for problem-solving ability. An individual with a speech impediment, for instance, might score low and be automatically ruled out.
Likewise, a chatbot programmed to reject applicants with gaps in their resumes could automatically turn away a qualified candidate who took time off from work due to a disability or the birth of a child.
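To make the mechanism concrete, here is a minimal sketch of such a gap-based screening rule. All names and the six-month threshold are illustrative assumptions, not any vendor's actual implementation:

```python
from datetime import date

# Hypothetical screening rule: reject any applicant whose employment
# history contains a gap longer than six months. This is the kind of
# proxy rule that can exclude qualified candidates who took leave for
# a disability or the birth of a child.

MAX_GAP_DAYS = 180  # illustrative threshold

def has_long_gap(jobs):
    """jobs: list of (start_date, end_date) tuples, sorted by start date."""
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if (next_start - prev_end).days > MAX_GAP_DAYS:
            return True
    return False

# A candidate who took roughly ten months of parental leave is screened
# out regardless of qualifications.
history = [
    (date(2018, 1, 1), date(2020, 6, 30)),   # first job
    (date(2021, 5, 1), date(2023, 1, 31)),   # job after returning from leave
]
print(has_long_gap(history))  # True: automatically rejected
```

The rule never sees *why* the gap exists, which is precisely how a facially neutral filter produces a discriminatory outcome.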
Heather Tinsley-Fix, the senior advisor for AARP, voiced concerns during the hearing that AI-based tools may disadvantage older workers in numerous ways.
Algorithms that scrape data from social media and digital profiles can disadvantage candidates with small digital footprints, who may not surface as the “ideal” choice in such searches.
Machine learning can also create a harmful feedback loop for applicants: the outcomes recorded for previous applicants, or the absence of feedback about them, shape what future applicants are asked and how they are judged.
She went on to say:
“If an older candidate makes it past the resume screening process but gets confused by or interacts poorly with the chatbot, that data could teach the algorithm that candidates with similar profiles should be ranked lower.”
AI will play a role in employment discrimination, but what that role looks like is still up for debate. The most important thing we can do right now is stay aware of the potential for bias and keep an open dialogue about how to address it.
These conversations will become even more important as AI becomes more prevalent in hiring. Have you been affected by AI-based discrimination? How do you think we should address this issue?