For as long as people have been hiring, bias has been part of the process. Sometimes it’s blatant, sometimes it’s invisible, but it’s always there. The words we use in job descriptions, the schools we value most, the hobbies we unconsciously associate with competence - these small decisions accumulate into patterns that shape who gets hired and who gets left behind.
Historically, hiring bias came in two main flavors: conscious bias (deliberate preference or prejudice) and unconscious bias (subconscious assumptions about people based on stereotypes). Even when well-meaning, recruiters could end up favoring candidates who “felt right” because they looked, spoke, or thought like them. That “gut feeling” was often just bias wearing a friendly mask.
When bias went digital
The hope with AI was simple: let the machines make the process more objective. No more subjective hunches - just data. But AI doesn’t appear out of thin air; it learns from the data it’s fed. And when that data reflects a history of biased decisions, the AI simply learns to repeat them - faster and at scale.
The headlines tell the story:
- Amazon’s resume screener penalized women’s resumes because it was trained on ten years of male-dominated applications.
- An AI resume ranking study from the University of Washington found significant racial, gender, and intersectional bias in how large language models scored candidates - sometimes purely based on their names.
- At one unnamed company, candidates listing hobbies like “baseball” or “basketball” were scored higher, while “softball” lowered scores - not because sports skills mattered for the job, but because the AI linked them to past “successful” (often male) hires.
- Google’s early hiring AI deprioritized underrepresented groups, because it was trained on a workforce dominated by Ivy League-educated white men.
In the U.S., such cases have already led to lawsuits - CVS Health was sued over its AI-based video interviews, and Workday faces ongoing litigation over alleged discrimination in its AI screening tools. So far, no major AI hiring bias case has reached the courts in Europe, partly because EU law focuses on preventing bias before it happens, while U.S. law typically addresses it after someone sues.
The new generation of AI - context over keywords
The AI of 2018 wasn’t great at nuance. It looked for patterns in the data - keywords, schools, certain job titles - without understanding the “why.” That’s how “baseball” could become a proxy for “team player” without considering that “softball” might demonstrate the same trait.
Generative AI, which rose to prominence in 2023, can understand more context. It can be trained to look at a hobby not for whether it matches the company’s existing employees, but for the underlying skills it might signal.
- Playing a team sport might suggest collaboration.
- Being a pro athlete might demonstrate discipline and commitment.
- A love for reading might show information processing ability or creativity.
If designed well, today’s AI can ignore irrelevant or discriminatory data points like gender, race, or age, while still drawing meaningful insights from a candidate’s experiences.
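To make that concrete, here is a minimal sketch of what a context-aware reading of a hobby could look like. It assumes the OpenAI Python SDK and an illustrative model name; the prompt wording is purely for illustration, not any vendor’s production setup.

```python
# Minimal sketch: asking a general-purpose LLM to translate a hobby into
# transferable skills while explicitly excluding protected attributes.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = """You are screening support for a recruiter.
Given the hobby below, list the transferable, job-relevant skills it may signal.
Do not consider or infer gender, race, age, or any other protected attribute.

Hobby: {hobby}
Skills:"""

def skills_from_hobby(hobby: str) -> str:
    """Return a short, skills-focused reading of a hobby (e.g. 'softball' -> teamwork)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(hobby=hobby)}],
        temperature=0,  # keep the reading stable across candidates
    )
    return response.choices[0].message.content

# Both hobbies should map to similar skills (collaboration, commitment),
# rather than one acting as a hidden proxy for gender.
for hobby in ("baseball", "softball"):
    print(hobby, "->", skills_from_hobby(hobby))
```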
The promise: AI as a bias reducer
AI can actually help address bias - but only if we use it intentionally:
- Anonymizing applications so names, ages, and genders aren’t factored into early screening.
- Flagging gender-coded language in job descriptions (“rockstar,” “ninja,” or the gaming slang “cracked”) that might unconsciously skew the applicant pool - a simple check of this kind is sketched below.
- Highlighting non-traditional candidates whose transferable skills fit the role, even if their CV looks unconventional.
- Standardizing interview scoring to minimize subjective variation between interviewers.
By doing this, AI can help recruiters break the “we only hire people like us” cycle and widen their search to talent they might have overlooked.
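As a simple illustration of the gender-coded language check mentioned above, the sketch below scans a job ad against a small word list. The terms and labels are illustrative examples, not a validated lexicon; real tools draw on research-backed word lists (e.g. Gaucher, Friesen & Kay, 2011).

```python
# Minimal sketch of flagging gender-coded language in a job description.
# The word list below is a small illustrative sample, not a validated lexicon.
import re

CODED_TERMS = {
    "rockstar": "masculine-coded",
    "ninja": "masculine-coded",
    "cracked": "masculine-coded (gaming slang)",
    "dominant": "masculine-coded",
    "aggressive": "masculine-coded",
    "nurturing": "feminine-coded",
}

def flag_coded_language(job_description: str) -> list[tuple[str, str]]:
    """Return (term, label) pairs for every coded term found in the text."""
    found = []
    for term, label in CODED_TERMS.items():
        if re.search(rf"\b{re.escape(term)}\b", job_description, re.IGNORECASE):
            found.append((term, label))
    return found

ad = "We need a rockstar engineer, a real ninja who is cracked at shipping fast."
for term, label in flag_coded_language(ad):
    print(f"Consider rewording '{term}' ({label})")
```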
The flip side: the importance of human involvement
The danger isn’t that AI will reject the weakest candidates - it’s that it could filter out the strongest ones for the wrong reasons. Even the most sophisticated AI can’t (yet) interpret everything that makes someone a great teammate: adaptability, subtle communication skills, emotional intelligence, and the kind of creative problem-solving that shows up in the moment.
As Oxford Professor Sandra Wachter says:
“There is a very clear opportunity to allow AI to be applied in a way so it makes fairer, and more equitable decisions that are based on merit and that also increase the bottom line of a company. However, no matter how efficient AI becomes, there will always be a role for the human.”
Humans bring intuition, empathy, and the ability to weigh context that isn’t captured in a data set. We can read the room, sense a mismatch between words and actions, and understand cultural nuance - things AI still struggles with. But that strength only matters if we recognize that human judgment is also prone to bias.
Just as AI can inherit bias from its training data, people carry their own assumptions into the process. This is why training recruiters and hiring managers to recognize and challenge their own biases is as important as building fair AI systems. Structured interviews, standardized evaluation criteria, and awareness training help ensure that human oversight adds value rather than introducing new blind spots. In other words, reducing bias in recruitment isn’t just about smarter machines - it’s also about more self-aware humans.
How Talentium works with bias
At Talentium, we’re building an AI-powered recruitment platform, and that means the question of bias isn’t just theoretical - it’s something we have to address head-on.
We believe that the best way forward is collaboration between humans and AI. That’s why we approach bias from two angles: how the platform is developed and how it is used by recruiters. On the development side, we use state-of-the-art models from partners like OpenAI and Google, who invest heavily in bias reduction and fairness. We don’t train our own models, which avoids the risk of reinforcing the narrow biases of a single dataset, and we ensure that sensitive attributes like name, gender, race, or age are excluded from evaluation.
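As a rough sketch of what excluding sensitive attributes can mean in practice (the field names here are hypothetical, not our production pipeline), the idea is simply that those fields never reach the model:

```python
# Rough sketch: dropping sensitive attributes from a candidate profile
# before it is ever passed to a model. Field names are hypothetical.
SENSITIVE_FIELDS = {"name", "gender", "race", "age", "date_of_birth", "photo_url"}

def redact_profile(profile: dict) -> dict:
    """Return a copy of the profile with sensitive attributes removed."""
    return {k: v for k, v in profile.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "Jane Doe",
    "age": 29,
    "skills": ["Python", "stakeholder management"],
    "experience_years": 6,
}
print(redact_profile(candidate))
# {'skills': ['Python', 'stakeholder management'], 'experience_years': 6}
```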
On the usage side, Talentium is designed to keep recruiters in control. Every key action - saving a candidate, sending outreach, or moving someone forward - is always made by the human. Our AI provides smart suggestions and reasoning, but never automates judgment. And crucially, every AI recommendation is explainable: we show exactly why a candidate appears in a search or why they’re marked as a match, so decisions are never just a “black box.”
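Illustratively (this is a sketch, not our actual data model), an explainable recommendation can be stored together with the evidence behind it, so the reasoning travels with the match:

```python
# Illustrative sketch of an explainable match record: every recommendation
# carries the evidence behind it, so the decision is never a "black box".
# Field names and contents are hypothetical, not Talentium's actual data model.
from dataclasses import dataclass, field

@dataclass
class MatchExplanation:
    candidate_id: str
    role: str
    matched_skills: list[str]        # skills found in both the CV and the role
    transferable_skills: list[str]   # skills inferred from adjacent experience
    excluded_attributes: list[str] = field(
        default_factory=lambda: ["name", "gender", "race", "age"]
    )                                # never used in scoring, listed for transparency

    def summary(self) -> str:
        return (
            f"Candidate {self.candidate_id} suggested for '{self.role}' because of "
            f"{', '.join(self.matched_skills)}; transferable: "
            f"{', '.join(self.transferable_skills)}. "
            f"Not considered: {', '.join(self.excluded_attributes)}."
        )

match = MatchExplanation(
    candidate_id="cand-042",
    role="Customer Success Manager",
    matched_skills=["stakeholder communication", "CRM tooling"],
    transferable_skills=["coaching (from teaching background)"],
)
print(match.summary())  # the recruiter sees the reasoning, then makes the call
```

The recruiter sees the reasoning and makes the call; the record of what was and wasn’t considered stays auditable.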

This is combined with a broader approach to candidate discovery. Talentium widens pools by pulling from multiple sources, highlights transferable skills rather than surface-level traits, and avoids overweighting a single attribute like title or employer. The result is that unconventional candidates - the ones who might otherwise be overlooked - get visibility too. In this way, AI becomes a tool for fairer, more transparent, and more inclusive hiring, while keeping human judgment at the center.
The next chapter: human judgment powered by AI
Bias in hiring has shifted from gut-driven decisions to algorithm-driven scoring, but it hasn’t disappeared. The question now isn’t whether AI will replace humans in hiring - it’s how humans and AI can work together to make better decisions.
Let AI do what it’s best at: sorting vast numbers of applications, flagging potentially biased patterns, and surfacing candidates with the right skills. Let humans do what we’re best at: interpreting nuance, challenging assumptions, and making the final call.
Because the goal isn’t just to hire quickly. It’s to hire fairly, build stronger teams, and open doors to talent that might otherwise be overlooked. AI can help us get there - but only if we remember that the ultimate judgment should rest with the human recruiters.