In an era increasingly dominated by artificial intelligence, the question of how reliance on AI affects human cognition has become both urgent and complex. A groundbreaking study recently published by the American Psychological Association casts new light on this conversation by exploring the nuanced ways that AI assistance can impact not our raw cognitive ability but rather our confidence, ownership of ideas, and depth of independent reasoning.
The study, involving a diverse sample of 1,923 adults from the United States and Canada, tasked participants with completing a series of simulated work challenges using commercially available AI programs. These challenges were carefully designed to reflect real-world cognitive demands, including planning with incomplete or evolving information, interpreting ambiguous data, and articulating strategic decision-making processes. The research sought to understand not only the extent of AI reliance but also its consequences for executive function and personal cognitive engagement.
Remarkably, the study revealed that over half of the participants—58%—felt that the AI did the bulk of the “thinking” involved in completing their assignments. This subjective sense of AI dominance was most pronounced in complex tasks like planning and sequencing, where the cognitive load and decision complexity tend to be higher. Those who perceived AI as the primary cognitive driver subsequently reported diminished confidence in their own reasoning abilities, as well as a lower sense of ownership over the ideas generated.
This diminished sense of authorship resonates with ongoing concerns about cognitive offloading, the phenomenon in which humans delegate cognitive responsibilities to external devices or systems. Such offloading, while efficient, risks eroding personal intellectual engagement over time. The trade-offs participants made between task speed and depth of thought exposed a clear behavioral pattern: when relying heavily on AI, they frequently prioritized rapid task completion over thorough cognitive processing.
Gender differences emerged subtly but consistently in the study’s findings, with male participants exhibiting a higher degree of AI reliance than their female counterparts. This finding underscores the importance of examining the socio-cultural and psychological factors that intersect with technology use, suggesting that AI integration in professional settings may influence cognitive engagement differently across demographic groups.
However, an encouraging counterpoint surfaced: participants who actively interrogated the AI’s outputs by modifying, challenging, or rejecting its suggestions reported enhanced confidence in their own reasoning. They felt a more robust sense of intellectual ownership, highlighting the critical role of active oversight in AI-enabled workflows. This dynamic indicates that the problem does not stem from AI usage itself but from passive acceptance of its outputs.
Sarah Baldeo, MBA and PhD candidate at Middlesex University specializing in AI and neuroscience, emphasizes the distinction between AI assistance and overreliance. She posits that maintaining active judgment—essentially human-in-the-loop oversight—empowers users to leverage AI as a tool rather than a crutch. This principle reflects broader cognitive science insights about metacognition, where awareness and regulation of one’s own thinking patterns foster deeper learning and problem-solving.
It is important to note that the study’s correlational design cannot establish causal relationships; the findings nonetheless offer compelling behavioral evidence of attenuated executive function in high-usage contexts. Executive functions, including working memory, cognitive flexibility, and inhibitory control, underpin the ability to plan, adapt, and make strategic decisions. Their attenuation through cognitive offloading to AI has significant implications for workplace productivity and innovation.
Developers of AI systems are urged to embed design features that discourage blind reliance and instead promote critical reflection by users. For instance, AI interfaces might incorporate prompts encouraging users to generate alternative solutions or to reassess underlying assumptions. Such interactive mechanisms could counteract the cognitive disengagement that passive AI acceptance fosters.
Baldeo further advises a strategic approach to AI integration, urging users to “train AI rather than letting it train you.” This approach advocates programming AI for tailored tasks rather than anthropomorphizing it or allowing its outputs to shape human thinking automatically. By fostering a partnership mindset where AI serves specific functions within well-defined boundaries, users can preserve their cognitive autonomy and creativity.
From a practical standpoint, Baldeo suggests attempting to solve problems independently before consulting AI, so that the habit of cognitive effort is preserved. She also recommends iteratively refining AI prompts to engage one’s own analytical faculties more deeply, which yields higher-quality and more customized AI responses. Additionally, she proposes periodic breaks from AI usage, spanning two to three days per week, to mitigate “intellectual leveling,” a phenomenon in which overexposure to AI-generated language homogenizes human communication styles and can inhibit originality.
Ultimately, this emerging evidence highlights a delicate balance at the intersection of technology and human cognition. The long-term risks of AI reliance may not manifest as reduced intelligence per se but rather as decreased engagement with complex cognitive work that fuels novel thinking and innovation. Recognizing and addressing this distinction is critical for individuals and organizations seeking to harness AI’s benefits without compromising intellectual rigor.
In the swiftly evolving landscape of AI-enabled work, respecting the nuances of human cognition and maintaining active intellectual engagement will be central to achieving a sustainable symbiosis between human and machine intelligence. Baldeo’s study serves as a clarion call for mindful AI integration, advocating for designs and behaviors that enhance rather than erode the uniquely human faculties of insight, judgment, and creativity.
Subject of Research: People
Article Title: Generative Artificial Intelligence Reliance and Executive Function Attenuation: Behavioral Evidence of Cognitive Offload in High-Use Adults
News Publication Date: 16-Apr-2026
Web References: https://www.apa.org/pubs/journals/releases/tmb-tmb0000191.pdf
References: Baldeo, S. (2026). Generative AI Reliance and Executive Function Attenuation: Behavioral Evidence of Cognitive Offload in High-Use Adults. Technology, Mind, and Behavior. DOI: 10.1037/tmb0000191
Keywords: Artificial intelligence, cognitive offload, executive function, AI reliance, human cognition, metacognition, AI integration, workplace productivity, AI-assisted decision-making, cognitive engagement