Your next job interviewer might not have a pulse.
On April 28, Amazon unveiled Connect Talent — an agentic AI system that finds, screens, interviews, and evaluates job candidates around the clock. No human involvement until the final hiring decision. It’s not a chatbot on a careers page. It’s a full-stack autonomous hiring agent, and Amazon is selling it to every enterprise that hires at scale.
They even coined a philosophy to soften the blow: humorphism.
How Connect Talent Actually Works
Connect Talent runs on AWS and targets high-volume hiring — retailers staffing for Black Friday, logistics companies ramping for peak season, anyone processing thousands of applicants.
The pipeline is fully autonomous:
- Sourcing — AI agents identify and reach out to candidates
- Screening — automated evaluation of qualifications
- Interviewing — AI-led voice interviews, available 24/7, testing problem-solving, logic, and job-specific skills
- Scoring — candidate scores with full transcripts and “clear job-related evaluation reasoning”
- Decision support — recruiters get a dashboard to make the final call
AWS SVP Colleen Aubrey confirmed candidates will know they’re talking to AI. She also admitted the system is “still being refined to sound more convincingly human.” Refreshingly honest. Mildly terrifying.
Amazon hired around 250,000 seasonal workers in 2025. Connect Talent is the productized version of what they’ve been building internally — now available to anyone willing to pay.
“Humorphism” — Philosophy or PR?
Amazon defines humorphism as the idea that AI should “adapt to how humans work, not the other way around.” Aubrey framed it as translating “the human behaviors of working together into a product.”
It’s a deliberate counter-narrative to the anxiety about AI replacing people. Instead of “our AI eliminates human interviewers,” Amazon says “our AI behaves like a human colleague.”
The irony writes itself. Amazon has laid off roughly 30,000 corporate employees since October 2025, with some cuts explicitly tied to “efficiencies gained through AI use.” Humorphism or not, the math doesn’t lie.
The Ghost of 2018
Amazon has been here before — and it went badly.
In 2018, Reuters revealed Amazon had secretly built an AI recruiting tool that rated candidates one to five stars. The problem: the system taught itself to penalize women. Trained on a decade of male-dominated tech resumes, it downgraded applications containing the word “women’s” and penalized graduates of all-women’s colleges. It favored action verbs like “executed” and “captured” that appeared more often on male engineers’ resumes.
Amazon scrapped the project entirely. They couldn’t be sure they’d eliminated the bias.
Eight years later, Connect Talent launches with promises of “reduced human bias” and “complete audit trails.” Every interaction is logged with job-related evaluation reasoning. It’s clearly designed to exorcise the ghosts of 2018.
Whether the underlying problem is actually solved is a different question.
The Legal Minefield Is Already Live
Connect Talent enters a landscape that’s increasingly hostile to AI hiring tools.
The EEOC secured its first AI hiring discrimination settlement — $365,000 — setting precedent that companies are liable for biased algorithms. HireVue and Intuit faced EEOC charges in March 2025. Research published via VoxDev found AI hiring tools “systematically favored female applicants over Black male applicants with identical qualifications.”
New York City already requires bias audits for AI hiring tools. Illinois restricts AI video interview analysis. The EU AI Act classifies employment AI as “high-risk.” Multiple states have proposed similar legislation.
Amazon claims Connect Talent generates “explainable” scores. But explainability and fairness aren’t the same thing. You can explain exactly how a biased system reached a biased conclusion.
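To make that distinction concrete, here is a toy scoring rule — entirely invented for illustration, echoing the word penalties Reuters reported in 2018. Every term is auditable and the score is perfectly explainable, yet the rule still discriminates through a proxy feature.

```python
# A fully "explainable" scoring rule that is nonetheless biased:
# each weight is transparent and every match is logged, but one
# feature is a proxy for gender.
WEIGHTS = {
    "executed": 2.0,   # action verbs the 2018 model learned to favor
    "captured": 2.0,
    "women's": -3.0,   # proxy feature — transparent, still discriminatory
}

def explain_score(resume: str) -> tuple[float, list[str]]:
    """Return a score plus a line-by-line explanation of how it was reached."""
    words = resume.lower().split()
    score, audit = 0.0, []
    for term, weight in WEIGHTS.items():
        if term in words:
            score += weight
            audit.append(f"{term!r} matched: {weight:+.1f}")
    return score, audit

s1, why1 = explain_score("Executed migration, captured market share")
s2, why2 = explain_score("Executed migration, women's chess club captain")
# Both candidates get a complete, itemized explanation — and the
# second is still penalized for a protected-class proxy.
```

The audit trail faithfully explains the biased outcome; it does not prevent it. That is the gap between “explainable” scores and fair ones.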
What This Actually Signals
The screening interview is dying. For high-volume roles, human phone screens were already a bottleneck. AI can do them faster, more consistently, and at 2 AM. This will become standard for hourly and seasonal work within a few years.
The human recruiter role is transforming. Amazon positions the human as final decision-maker. Recruiters review AI scores and transcripts, then decide. But when the AI is right 95% of the time, does the human add value or just liability?
Candidate experience cuts both ways. Some applicants will love interviewing in pajamas at midnight. Others will feel dehumanized discussing their career with a machine. The candidates who refuse AI-first hiring may become a new kind of talent filter.
The Humorphism Paradox
There’s something philosophically strange about Amazon’s pitch. They built AI to replace human interviewers, then engineered the AI to act human so you don’t notice the humans are gone.
The word itself is telling — the inverse of anthropomorphism. Instead of humans projecting human qualities onto machines, a corporation is deliberately engineering human-like qualities into AI to smooth over the removal of actual humans.
It’s also a tacit admission that people don’t want AI making high-stakes life decisions about their employment. If they did, Amazon wouldn’t need to spend engineering effort making it sound “more convincingly human.”
Connect Talent will succeed commercially. Companies spending millions on seasonal hiring will jump at anything that cuts time-to-fill and cost-per-hire. But somewhere right now, a civil rights attorney is bookmarking Amazon’s product page.
The race to automate hiring is on. The race to govern it has barely started.