In the rapidly evolving world of recruitment technology, artificial intelligence (AI) stands as a beacon of promise, revolutionizing the way organizations identify and attract talent. Yet, a groundbreaking new study published in Humanities and Social Sciences Communications by Luo, Zhang, and Mu (2025) casts a revealing light on the unintended consequences of AI-enabled interviews. Contrary to the prevailing narrative that automation streamlines hiring and enhances candidate experience, this comprehensive research warns that highly automated AI interviews may erode applicants’ confidence and diminish their desire to pursue job opportunities.
The crux of the study revolves around the psychological and perceptual impact of AI in hiring processes, especially regarding procedural justice—the fairness and transparency of the procedures used to evaluate candidates—and organizational attractiveness, which denotes how desirable an employer appears from a candidate’s perspective. The research reveals a paradox: while AI systems potentially increase efficiency and objectivity, they simultaneously generate skepticism or negativity among job seekers, leading to reduced intention to apply.
Particularly revealing is the finding that candidates perceive highly automated AI interviews as impersonal and rigid, contributing to a sense of unfairness. Procedural justice theories traditionally emphasize the importance of consistency, transparency, and voice in evaluation processes. Yet AI interviews, by design, limit candidates’ ability to express themselves interactively and to receive immediate human feedback, creating a perception gap that negatively impacts how candidates evaluate the fairness of such systems. This lack of human nuance in AI interactions is not merely a technological challenge but a psychological hurdle that organizations must address.
Moreover, the study probes organizational attractiveness, a multi-dimensional concept tied closely to employer branding, culture, and perceived ethical standards. Candidates increasingly assess prospective employers based on authentic human engagement and responsiveness during recruitment. The deployment of AI-enabled interviews, when perceived as overly mechanized, can diminish an employer’s allure, signaling a transactional rather than relational approach to talent acquisition. This perception can be especially detrimental when organizations compete for top-tier talent who value meaningful connections early in their job search.
A striking dimension of Luo et al.’s findings is the differential impact of AI interview formats based on industry context. Specifically, candidates exhibited a more positive reaction towards AI interviews within high-tech industries compared to low-tech sectors. This suggests a critical compatibility factor: applicants’ expectations and comfort with technology shape their receptivity to automation. In environments where technological innovation is culturally embedded and expected, AI interviews align better with organizational identity and candidate mindset, thereby mitigating negative perceptions.
Delving deeper, the researchers carried out rigorous empirical analyses, sampling diverse job seeker populations and simulating various interview conditions with differing degrees of AI automation. The results consistently showed that full automation, where candidates interact solely with AI without human intermediation, correlated with lower procedural justice perceptions and organizational attractiveness ratings. Intermediate models that incorporated AI-assisted but human-mediated interviews fared better, underscoring the need for balanced hybrid approaches.
The study also discusses the importance of transparency in AI interview processes. Transparency means not only informing candidates that AI is being used but also clarifying how AI evaluates responses and the criteria for advancement. The lack of clear communication about AI decision-making mechanisms contributed to candidates’ distrust and skepticism, fostering a “black box” perception. Enhancing openness could alleviate concerns by demystifying automation, thus restoring some measure of procedural fairness.
From a technological perspective, these insights challenge the current trajectory of recruitment automation. The integration of natural language processing, computer vision, and adaptive learning algorithms in AI interviews aims to replicate human evaluators but often falls short of capturing the emotional, contextual, and ethical subtleties critical to judging complex human interactions. This gap raises urgent questions about AI’s limitations and the ethical responsibilities employers bear when deploying such systems.
The research further highlights how perceived impersonality and lack of empathy within AI interviewing exacerbate anxiety and uncertainty among candidates. Unlike traditional interviews, where rapport building and active listening foster mutual trust, AI systems risk reducing the recruitment process to a transactional checklist. Such experiences may discourage qualified candidates from engaging fully or applying at all, thereby narrowing the talent pool and potentially compromising organizational diversity and innovation.
Organizations stand at a crossroads as they weigh the operational benefits of AI-enabled interviews against their strategic implications for talent acquisition. Efficiency gains in scaling candidate assessments are undeniable, but Luo and colleagues urge companies to adopt a holistic evaluation framework that incorporates candidate experience metrics alongside recruitment KPIs. Failing to do so could inadvertently damage employer reputation and lead to long-term recruitment challenges.
Pragmatically, the study motivates the design of AI hiring tools that integrate human judgment at critical touchpoints. Hybrid models enabling human reviewers to interpret AI outputs, provide feedback, and contextualize results appear more promising. Additionally, embedding mechanisms for candidate voice and feedback during AI interactions can restore perceptions of procedural justice and reinforce organizational attractiveness.
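One way to operationalize such a hybrid model is a review gate in which AI scores alone can advance a candidate but never reject one: anything below a threshold is routed to a human reviewer. The sketch below is purely illustrative; the thresholds, field names, and routing labels are assumptions, not the study's design.

```python
# Illustrative sketch of a hybrid human-in-the-loop review gate.
# AI scores alone never reject a candidate; borderline or low scores
# are escalated to a human reviewer instead of being auto-filtered.
from dataclasses import dataclass

@dataclass
class ScreenResult:
    candidate_id: str
    ai_score: float  # normalized 0.0-1.0 score from the AI interview

def route(result: ScreenResult, advance_at: float = 0.8) -> str:
    """Return the next pipeline step: auto-advance only on high scores;
    otherwise escalate to a human who interprets and contextualizes
    the AI output, rather than auto-rejecting."""
    if result.ai_score >= advance_at:
        return "advance"
    return "human_review"

print(route(ScreenResult("c-101", 0.91)))  # advance
print(route(ScreenResult("c-102", 0.42)))  # human_review
```

The key design choice is asymmetry: automation is trusted to say "yes" at scale but never to say "no" unilaterally, which preserves the human touchpoints the study links to procedural justice perceptions.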
This research is a clarion call for greater interdisciplinary collaboration among AI developers, organizational psychologists, and HR practitioners. Understanding candidate psychology and industry-specific norms enables the crafting of AI hiring systems that are sensitive to human concerns and contextual variables. Moreover, these findings encourage ongoing innovation aimed at transparency, explainability, and fairness in AI algorithms that inform hiring decisions.
The industry-specific findings raise intriguing possibilities for customizing AI interview frameworks according to sectoral cultures. In technology-driven contexts, candidates may perceive AI tools as natural extensions of their work environments, fostering acceptance and stronger application intent. Conversely, in sectors with less digital sophistication, organizations must recalibrate AI adoption strategies to maintain human connection and trust.
At a societal level, this study invites reflection on the broader implications of automation in workforce entry points. If candidates develop negative attitudes towards AI-mediated hiring, this might generate systemic barriers to employment equity and inclusion. Ensuring that progress in recruitment technology does not inadvertently marginalize certain groups is an ethical imperative that resonates beyond individual companies.
In parallel, regulators and policymakers could consider guidelines or standards for deploying AI in recruitment, emphasizing fairness, transparency, and candidate rights. Establishing frameworks for accountability—such as auditing algorithms for biases or ensuring recourse mechanisms for applicants—would align technological advances with social justice goals.
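A concrete form such an audit could take is the "four-fifths rule" used in US employment-selection guidance: a group's selection rate should be at least 80% of the highest group's rate. The sketch below is a minimal, hypothetical illustration of that check applied to an AI interview screen; the group names and counts are invented for the example.

```python
# Illustrative adverse-impact audit using the four-fifths rule.
# All group names and applicant counts here are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who advanced past the AI screen."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """True if a group's selection rate is at least 80% of the highest
    group's rate; False flags potential adverse impact for review."""
    highest = max(rates.values())
    return {group: rate / highest >= 0.8 for group, rate in rates.items()}

# Hypothetical outcomes from one hiring cycle:
rates = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(30, 100),  # 0.30
}
print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False}
```

A failed check is a signal for human investigation, not proof of bias; pairing automated audits like this with recourse mechanisms for applicants is one way to make the accountability frameworks the article describes concrete.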
The study by Luo and colleagues is a pivotal contribution to understanding the nuanced interplay between automation, candidate perceptions, and hiring outcomes. It pushes the conversation past simplistic efficiency narratives and foregrounds the human experience as a core metric of success. As businesses continue to explore AI frontiers in talent acquisition, these insights provide a foundational roadmap for responsible and effective implementation.
In conclusion, AI-enabled interviews, while revolutionary, are not a panacea—particularly when deployed without careful attention to candidate psychology and industry context. Organizations must strive for a balanced approach that preserves human touchpoints, fosters transparency, and adapts to cultural expectations. Only then can AI serve as a true ally in building fair, attractive, and effective hiring ecosystems that benefit employers and applicants alike.
Subject of Research: The impact of AI-enabled interviews on job seekers’ perceptions of procedural justice, organizational attractiveness, and their intention to apply for jobs.
Article Title: Why might AI-enabled interviews reduce candidates’ job application intention? The role of procedural justice and organizational attractiveness.
Article References:
Luo, W., Zhang, Y. & Mu, M. Why might AI-enabled interviews reduce candidates’ job application intention? The role of procedural justice and organizational attractiveness. Humanit Soc Sci Commun 12, 1278 (2025). https://doi.org/10.1057/s41599-025-05607-z