What Talent Leaders Need to Know — and Do — About the Upsurge in Candidate Cheating
Is there a new cheating crisis?
Not yet. But over a year ago, I raised the alarm about the growing risk of candidate cheating and suggested that we, as talent leaders, start talking with our hiring teams about it: both to develop a point of view (not everyone considers it cheating when a candidate uses AI) and to begin mitigating hiring fraud risks.
In mid-2025, this concern is no longer hypothetical. Advances in candidate AI tools + remote interviewing + inexperienced interviewers who read questions from interview guides + standardized online assessments + an uncertain economy creating more candidates than jobs + identity fraud = current (high) risk.
OK, it’s a risk, but is this really already happening now?
At a TA leadership workshop I was leading in New York, a VP of TA at a big company told me she's now working closely with her chief information security officer to figure out how to mitigate candidate cheating. It's already common enough to raise serious concerns about 1) whether they're hiring the person they think they're hiring and 2) whether the candidate is lying about their capabilities.
They're looking at tech to separate real humans from AI in video interviews, because typical recruiters and hiring managers aren't sure how to confirm they're talking to the person they think they're talking to, whether they can believe what they're hearing, or what to do on a Zoom call if they suspect fraud or cheating.
Gartner shared this with their CHRO members: “Strategic Planning Assumption: By 2028, one in four candidate profiles worldwide will be fake.” I have no idea what the fake rate is now, but it has probably increased since 2023. I don't need to tell you about the fake resumes we're all getting bombarded with. One of our clients said they often see six to 10 almost-identical resumes coming from the same person or bot, with minor changes to names, emails, and bullets to improve the perceived chances of passing through their filters. I won't name the companies that will blast your resume to 5,000 recruiters and career sites for $100, but you know that finding the signal in the noise created by mass-apply bots is going to be terrible going forward.
Earlier this year, Google’s CEO Sundar Pichai suggested hiring managers return to in-person interviews due to concerns about identity fraud, while Deloitte U.K. shifted its early career interviews back onsite after fraud indicators emerged.
Bottom line: If you're running virtual interviews or assessments today, cheating is already happening. The question isn't “If?” It's “How much?” And, more importantly, “What are you going to do about it?”
What’s changed in 2025 — and what’s coming next
Here’s how cheating and fraud have evolved this year, and why it could get worse in 2026:
- Real-time AI prompting tools now whisper answers to candidates or pop responses on mirrored screens. Picture real-time teleprompting tools. One of our clients had a candidate invite their own personal AI bot to join the Zoom interview to “take notes.” In reality, it was listening to the hiring manager’s questions and prompting the candidate with near-perfect answers.
- Deepfake tech enables face- and voice-masking, making it possible for a completely different person to sit through an interview on behalf of a candidate. Picture this: You think you're interviewing John, but you're really interviewing a far more capable professional stand-in with John's face “projected” onto their own. What? Yes! This is nuts.
- AI assistants write code live during tech interviews or answer case study questions in two seconds. Picture a candidate mining online sites for your company's typical interview questions, coding or design tests, and case study problem-solving exercises, then showing up to your interview trained and ready with “perfect answers” that make them sound extremely qualified based on the job description.
- And then there’s the even more serious fraud, where bad actors are sending their candidates through your process to gain competitive insights and get hired so that they can steal your IP or hack into your systems and wreak havoc.
What’s coming?
It's easy to imagine the kind of progress we'll see in 2026, especially if the job market remains challenging for entry-level talent and super competitive for high-paying jobs at big-brand companies — that'll only increase the demand for tools that give candidates an edge. What do you think we'll see? I predict much better impersonation tech, much better candidate tools to ace traditional interviews, and a lot of bot-to-bot recruiting — an arms race where employers are playing catch-up to candidate bots and tools, exacerbated by automation and human-free processes aimed at efficiency.
I think if you look even further out — as AI agents take a more active, more independent role in executing our requests — we'll see job seeker bots that will 1) auto-apply with tailored resumes, applications, and work samples, 2) auto-reply to InMails, emails, and texts, 3) auto-generate targeted networking and application emails sent directly to the hiring manager's inbox, and 4) engage directly with employer/recruiter bots to check for an early two-way match. (My job-seeker agent will know what I want and need, and will be able to screen the job and company culture and pay and working hours and location — including bus routes and parking — and many other factors to see if it's a viable opportunity I'd want to pursue, before any human-to-human interaction is required by me, the candidate, or the employer.)
How to diagnose the risk at your company
Use this checklist to assess whether cheating may be undermining your hiring decisions:
- Have pass rates for your online tests changed dramatically in the last 12 months? Are you reusing assessment content and unproctored virtual tools with no identity verification? If so, dig into why pass rates have shifted and revisit your use of online assessments to ensure they're still adding value.
- Do recruiters notice interview start delays, odd eye movements, or voice lag on Zoom interviews or recorded video interviews? There may be some kind of AI projection or listening/prompting tool helping the candidate.
- Are candidates stronger in their online or Zoom assessments than in their in-person interviews? Your screen may have been completed by a “super candidate”: AI and/or someone other than the actual candidate.
- Are hiring managers reporting concerns about new hires who seemed great in interviews but struggle to perform when asked to do things they seemed capable of doing in the interviews? This has happened for years, even pre-AI, of course. We all make hiring mistakes. But if you see a pattern, it’s time to pause and revisit your whole interviewing process.
- A client's CHRO told me they hired a VP-level candidate who, it turned out, had clearly faked their way through virtual interviews and, once hired, couldn't do the job. It took over four weeks to build the performance record and complete the investigation needed to fire them for fraud and performance issues. It was a costly, highly visible hiring mistake.
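For teams that track assessment data, the first checklist item above — a dramatic change in pass rates — can be sanity-checked with simple statistics. Here's a minimal sketch using a two-proportion z-test; the function name and the numbers are illustrative, and it assumes you can pull pass/fail counts for a prior and a recent window:

```python
# Hypothetical sketch: flag a statistically significant shift in online
# assessment pass rates between two time windows (stdlib only).
from math import sqrt

def pass_rate_shift(passed_prior, total_prior, passed_recent, total_recent):
    """Return (prior rate, recent rate, z-score) via a two-proportion z-test."""
    p1 = passed_prior / total_prior
    p2 = passed_recent / total_recent
    # Pooled proportion and standard error under the "no change" hypothesis
    pooled = (passed_prior + passed_recent) / (total_prior + total_recent)
    se = sqrt(pooled * (1 - pooled) * (1 / total_prior + 1 / total_recent))
    return p1, p2, (p2 - p1) / se

# Illustrative numbers: pass rate jumped from 40% (400/1000) to 55% (440/800)
p1, p2, z = pass_rate_shift(400, 1000, 440, 800)
if abs(z) > 1.96:  # roughly 95% confidence the change isn't random noise
    print(f"Pass rate moved {p1:.0%} -> {p2:.0%} (z={z:.1f}): worth investigating")
```

A large z-score doesn't prove cheating — a better candidate pool or an easier test version moves the number too — but it tells you the shift is real and worth the dig the checklist calls for.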
10 things talent acquisition leaders should be doing now
- Evaluate the tools and assessments you're using to screen and interview candidates. Ask your vendors what they can do to help you prevent fraud and cheating, and ensure they're building tech that won't get you in trouble.
- Reach out to your chief security officer or CIO and start a conversation about how you’re leveraging tech to evaluate candidates. Ask them about the practices and tech they’re putting in place in other parts of your business to reduce fraud risks and ensure security breaches don’t happen.
- Consider using proctored tests, with live humans evaluating live humans. And don’t use the same assessments or interview questions for every candidate. It would be trivial for a candidate today — using just their mobile phone, sitting next to their laptop — to record a live interview, have AI scrape the questions asked, turn them into text, generate great answers, and share that on the web as a prebuilt “ace the interview” guide for your jobs.
- Develop a policy on what candidate cheating is and isn’t and add that to your career site (example here from Ericsson). Be clear about what’s OK and what’s not OK.
- Train your recruiters to spot cheating and fraudulent behavior. This can look like robotic phrasing, perfect answers, excessive pausing, and video/audio lag or jaggedness.
- Train your recruiters to screen candidates effectively and to lead high-quality interviewing and hiring decision processes.
- Train your interviewers and hiring managers to interview effectively. And if you decide employees can and should use AI on the job once hired, make sure interviewers know how to focus candidate interviews on the non-AI-assisted skills new hires will need to succeed.
- Return to in-person interviews for finalists for key roles, and then leverage more skill-application (show me) interviewing techniques than behavioral (tell me) ones. The risk is just too high if you’re hiring someone by using knowledge-based Zoom interviews and online assessments. Answers to even challenging questions are now just a simple prompt away with AI.
- Work with HR and IT/security and legal teams to determine how you can better validate identity early in the recruiting process.
- Learn as much as you can about AI. Read, research, play around with it on your phone and computer, talk to vendors who are building tools to sell to TA, talk to your hiring teams about what they're building and buying, and talk to peers to see what they're learning, especially if you're a bit behind the curve.
Final thoughts
The systems we've built for speed and efficiency in a world that moved to remote work and Zoom interviews now require new safeguards to protect us from fraud, security risks, and bad hires.
I don't think it's a crisis, and I think most people are generally good and aren't trying to cheat or commit fraud. But in this economy, with all the tools employers and candidates have access to and all the risks outlined by research firms and TA leaders, it's time we as TA leaders lead this conversation with our teams and engage legal, IT, and security to mitigate the risks.
I’d love to learn more about what you’re seeing and what you’re doing to mitigate risks. I’ll post this on my personal LinkedIn feed so you can share.
Copyright Recruiting Toolbox, Inc.
John Vlastelica is a former corporate recruiting leader with Amazon and Expedia turned consultant. He and his team at Recruiting Toolbox are hired by world-class companies to train hiring managers and recruiters, coach and train TA leaders, and help raise the bar on who they hire and how they hire. If you’re seeking more best practices, check out the free resources for recruiters at TalentAdvisor.com and for recruiting leaders at RecruitingLeadership.com. And if you’re a head of TA from a large company, check out www.RLL50.com for info on our special workshop just for senior recruiting leaders, where we’ll dig into the impact of AI on our TA orgs, redefine the role of the recruiter, and dig into best practices for driving adoption of new tech and role expectations with our recruiters and hiring managers.