Artificial intelligence is increasingly becoming a tool in more employee and employer toolboxes. But government agencies are calling for pumping the brakes as we head into this uncharted tech territory.
Just six months old, OpenAI’s ChatGPT has been used for work-related tasks by 74% of American employees familiar with the technology, according to a recent survey of U.S. adults by The Harris Poll on behalf of Fortune. (Don’t worry, I did not use ChatGPT to write this column.)
Earlier this year, U.S. Equal Employment Opportunity Commission Chair Charlotte Burrows said at a hearing that some 83% of employers, including 99% of Fortune 500 companies, use some form of automated tool as part of their hiring process.
That’s a big cause for concern for the Biden administration, as it contends AI could discriminate against job applicants. New York City has already taken action to mitigate the effects of AI on hiring. In July, it will start enforcing regulations around the use of automated employment decision tools by requiring such tools be subject to bias audits and requiring hirers to inform candidates of their use.
The EEOC last year sued an English-language tutoring service for age discrimination, alleging the employer’s AI algorithm automatically rejected applicants, according to a report from the Society for Human Resource Management. Examples of AI-fueled discrimination are not new. In 2018, Amazon scrapped an AI-powered recruitment tool after engineers discovered it showed a bias against female candidates. A corporate memo leaked at the end of 2022 suggested the e-commerce giant was trying again, as hundreds of recruiters were laid off and their work was moved to AI platforms.
A Pew Research Center survey finds Americans are both worried and hopeful about AI’s role in hiring. The majority of those surveyed oppose using AI in final hiring and firing decisions. But nearly half believe AI would do better than a human at fairly evaluating all job applicants.
Perhaps AI does remove a human’s conscious and subconscious bias. But tech has bias, too, because of who it learns from – humans. ChatGPT, for instance, was trained on a dataset of some 300 billion words, roughly 570 gigabytes of text from books and other writing on the internet.
“ChatGPT is only as good as the data it can pull from,” said Jeffrey L. Bowman, founder and CEO of the tech platform Reframe in New York City, in a SHRM article. “With the case of talent acquisition, there is already a DE&I issue for most companies, and if the ChatGPT data has gaps, it will likely have gaps across race, gender and age.”
Last year, the EEOC shared guidance around AI hiring tools. Resume scanners that prioritize keywords, chatbots that sort candidates based on a set of predefined requirements, and programs that evaluate a candidate’s facial expressions and speech patterns in interviews can perpetuate bias or be discriminatory, the agency found.
AARP senior adviser Heather Tinsley-Fix said at the EEOC hearing earlier this year that companies that scrape data from social media and digital profiles while searching for an ideal candidate may overlook people with smaller digital footprints, according to an NPR report. Machine learning could also hurt future applicants.
“If an older candidate makes it past the resume screening process but gets confused by or interacts poorly with the chatbot, that data could teach the algorithm that candidates with similar profiles should be ranked lower,” she said.
The federal government has already taken steps to guard against discrimination in AI. President Joe Biden signed an executive order that directs federal agencies to root out bias in the design and use of new technologies. The Federal Trade Commission, Consumer Financial Protection Bureau, Department of Justice’s Civil Rights Division and the EEOC issued a joint statement sharing their commitment to leverage existing legal authorities to protect the American people from AI-related harms.
In an article penned in Forbes, IQTalent President David Windley contended AI can both add to and eliminate discrimination in hiring. AI uses machine learning and algorithms to draw conclusions. Because it learns from users, it may select future candidates based on past selections and therefore miss candidates who have been historically overlooked. On the flip side, if a recruiter favors a particular university on resumes but the AI has been programmed to look at a wider pool, it could eliminate that human bias.
AI is a powerful tool with the potential to transform how we work and what jobs are even needed. It’s also clear that without regulation, guardrails and auditing, these tools can cause more harm than they’re worth.
This is an area we all must closely watch. Those in charge of hiring platforms and decisions should carefully evaluate internal tech systems and understand the risk.
Alex Meier, an Atlanta-based attorney, told SHRM that humans must remain actively involved in the decision-making process and know what criteria are being used in any technology they deploy.
“Without these measures, a company could find itself trying to defend an employment decision without any ability to explain the basis for its decision,” Meier said. “You can’t go to a judge, jury or arbitrator with, ‘We did what the machine told us to do.’”
Springfield Business Journal Executive Editor Christine Temple can be reached at firstname.lastname@example.org.