There’s a lot to chew on in this short article (ht @ajsadauskas):
https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination
“An AI resume screener…trained on CVs of employees already at the firm” gave candidates extra marks if they listed male-associated sports, and downgraded female-associated sports.
Bias like this is enraging, but completely unsurprising to anybody who knows half a thing about how machine learning works. Which apparently doesn’t include a lot of execs and HR folks.
1/
Years ago, before the current AI craze, I helped a student prepare a talk on an AI project. Her team asked whether it’s possible to distinguish rooms with positive vs. negative affect — “That place is so nice / so depressing” — using the room’s color palette alone.
They gathered various photos of rooms on campus, and manually tagged them as having positive or negative affect. They wrote software to extract color palettes. And they trained an ML system on that dataset.
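I don't have their actual code, but the shape of it was roughly this; a sketch in Python, where the file layout, feature choice, and library calls are all my own guesses:

```python
# Hypothetical reconstruction, not the students' actual code.
# Assumes Pillow + scikit-learn, and photos hand-sorted into
# rooms/positive/*.jpg and rooms/negative/*.jpg by the manual tagging step.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def palette_features(path, n_colors=8):
    """Boil an image down to its dominant colors: a crude 'palette' vector."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    quantized = img.quantize(colors=n_colors, method=Image.Quantize.MEDIANCUT)
    palette = np.array(quantized.getpalette()[: n_colors * 3], dtype=float)
    return palette / 255.0  # flat vector: [r1, g1, b1, r2, g2, b2, ...]

X, y = [], []
for label, tag in [(1, "positive"), (0, "negative")]:
    for path in Path("rooms", tag).glob("*.jpg"):
        X.append(palette_features(path))
        y.append(label)

scores = cross_val_score(LogisticRegression(max_iter=1000), np.array(X), np.array(y), cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # looks great! but wait...
```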
2/
Guess what? Their software succeeded!…at identifying photos taken by Macalester’s admissions dept.
It turns out that all the publicity photos, massaged and prepped for recruiting materials, had more vivid colors than the photos the students took themselves. And the team had mostly used publicity photos for the “happy” rooms and their own photos for the “sad” rooms (which generally don’t show up in publicity materials).
They’d encoded a bias in their dataset, and machine learning dutifully picked up the pattern.
Oops.
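One sanity check that would have caught it: ask whether your features can predict the thing you •don’t• want them to know. Here’s a self-contained toy version, every number invented, where a “saturation” feature leaks the photo’s source:

```python
# Toy demonstration with synthetic data; all numbers here are invented.
# The test: if photo *source* is recoverable from your features, any label
# correlated with source will look "learnable" even if the model has learned
# nothing about what you actually care about.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
source = rng.integers(0, 2, n)  # 1 = publicity photo, 0 = student photo
# Publicity photos skew more saturated; the other features are pure noise
saturation = np.where(source == 1, rng.normal(0.7, 0.1, n), rng.normal(0.4, 0.1, n))
features = np.column_stack([saturation, rng.normal(size=(n, 5))])

# Source is almost perfectly recoverable from the features...
print(cross_val_score(LogisticRegression(max_iter=1000), features, source, cv=5).mean())

# ...so "affect" labels that mostly track source (happy ≈ publicity photo)
# score nearly as well, with zero actual information about affect.
affect = np.where(rng.random(n) < 0.9, source, 1 - source)
print(cross_val_score(LogisticRegression(max_iter=1000), features, affect, cv=5).mean())
```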
3/
The student had a dilemma: she had to present her research, but the results sucked! The project failed! She was embarrassed! Should she try to fix it at the last minute?? Rush a totally different project?!?
I nipped that in the bud. “You have a •great• presentation here.” Failure is fascinating. Bad results are fascinating. And people •need• to understand how these AI / ML systems break.
4/
She dutifully gave the talk on the project as is, complete with the rug pull at the end: “Here’s our results! They’re so broken! Look, it learned the bias in our dataset! Surprise!” It got an audible reaction from the audience. People •loved• her talk.
I wish there had been some HR folks at her talk.
Train an AI on your discriminatory hiring practices, and guess what it learns? That should be a rhetorical question, but I’ll spell it out: it learns how to infer the gender of applicants.
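A toy version, just to make the mechanism concrete (synthetic data, invented numbers, and note that gender is deliberately withheld from the model):

```python
# Hypothetical toy screener; every number is made up.
# Historical "hire" labels are biased; watch what happens to the sports fields.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
qualified = rng.integers(0, 2, n)                    # equal across genders
is_woman = rng.integers(0, 2, n)
baseball = (1 - is_woman) & (rng.random(n) < 0.4)    # gender-correlated hobby lines
softball = is_woman & (rng.random(n) < 0.4)

# Biased past decisions: qualification matters, but women were hired less often
hired = (qualified & (rng.random(n) < np.where(is_woman, 0.5, 0.9))).astype(int)

X = np.column_stack([qualified, baseball, softball])  # gender itself is "hidden"
model = LogisticRegression().fit(X, hired)
print(dict(zip(["qualified", "baseball", "softball"], model.coef_[0].round(2))))
# Typical output: baseball gets a positive weight, softball a negative one.
# The model never saw gender; it reconstructed it from the proxies.
```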
5/
An interesting angle I’m sure someone is studying properly: when we feed these tabula rasa ML systems a bunch of data about the world as it is, and they come back puking out patterns of discrimination, can that serve as •evidence of bias• not just in AI, but in •society itself•?
If training an ML system on a company’s past hiring decisions makes it think that baseball > softball for an office job, isn’t that compelling evidence of hiring discrimination?
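That kind of evidence can even be quantified. One standard measure (a sketch, with made-up selection rates): the EEOC’s four-fifths rule, which flags adverse impact when one group’s selection rate falls below 80% of another’s:

```python
# Sketch of a four-fifths-rule audit; the selection rates are invented.
import numpy as np

rng = np.random.default_rng(2)
is_woman = rng.integers(0, 2, 1000).astype(bool)
# Suppose the trained screener recommends 60% of men and 35% of women
recommended = np.where(is_woman, rng.random(1000) < 0.35, rng.random(1000) < 0.60)

rate_women = recommended[is_woman].mean()
rate_men = recommended[~is_woman].mean()
ratio = rate_women / rate_men
print(f"selection rates: women {rate_women:.2f}, men {rate_men:.2f}, ratio {ratio:.2f}")
print("adverse impact red flag" if ratio < 0.8 else "passes four-fifths check")
```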
6/
There’s an ugly question hovering over that previous post: What if the men •are• intrinsically better? What if discrimination is correct?? What if the AI, with its Perfect Machine Logic, is bypassing all the DEI woke whatever to find The Actual Truth??!?
Um, yeah…no.
A delightful tidbit from the article: a researcher studying a hiring AI “received a high rating in the interview, despite speaking nonsense German when she was supposed to be speaking English.”
These systems are garbage.
7/
I mean, maaaaaaybe AI can help with applicant screening, but I’d need to see some •damn• good evidence that the net effect is positive. Identifying and countering training set bias, evaluating results, teasing out confounders and false successes — these are •hard• problems, problems that researchers spend long months and years working to overcome.
Do I believe for a hot minute that companies selling these hiring AIs are properly doing that work? No. No, I do not.
8/
AI’s Shiny New Thing promise of “your expensive employees are suddenly replaceable” is just too much of a candy / crack cocaine / FOMO lure for business leaders desperate to cut costs. Good sense cannot survive the onslaught.
Lots of businesses right now are digging themselves into holes that they’re going to spend years climbing out of.
9/
Doing sloppy, biased resume screening is the •easy• part of HR. Generating lots of sort-of-almost-working code is the •easy• part of programming. Producing text that •sounds• generally like the correct words but is a subtle mixture of obvious, empty, and flat-out wrong — that’s the •easy• part of writing.
And a bunch of folks in businesses are going to spend the coming years learning all that the hard way.
10/
@inthehands I think there is one exception--for a lot of people in creative fields who may have some kind of borderline ADHD condition, getting past the blank page or the digital equivalent is a real struggle. And if there's something that can push them past that step from nothing to something, they'll find it useful.
There's a powerful temptation to just use version zero, though, especially if you're not the creator but the person paying the creator.
@mattmcirvin Indeed, I ran a successful exercise much along these lines with one of my classes (see student remarks downthread):
https://hachyderm.io/@inthehands/109479808455388578
I think there really is a “there” there with LLMs; it just bears close to no resemblance to the wildly overhyped Magic Bean hysteria currently sweeping biz. Generating bullshit does actually have useful applications. But until the dust settles, how much harm will it cause?