Recent research led by a team of UK universities has revealed significant insights into the intersection of artificial intelligence (AI) and cybercrime, challenging prevailing narratives about cybercriminals' ability to exploit cutting-edge technology. Scrutinizing a dataset of over 100 million posts drawn from underground cybercrime forums, the study offers a nuanced picture of how AI tools, from generative models such as ChatGPT to AI-powered coding assistants, are being taken up by cybercrime communities. Contrary to widespread alarmist reports, the findings suggest that the technological prowess within these illicit networks is limited, tempering fears of an imminent AI-driven cybercrime revolution.
The analysis, conducted by researchers from the Universities of Edinburgh, Cambridge, and Strathclyde, combined machine learning techniques with meticulous manual review. The team focused on discussions posted since the release of ChatGPT in late 2022, a pivotal moment that gave the public rapid access to highly capable generative AI systems. Their goal was not only to identify patterns of AI adoption but also to ascertain whether these tools translate into tangible operational benefits for cybercriminals. The answer, it turns out, is a complex and somewhat underwhelming one.
Fundamentally, the study found that cybercriminals predominantly apply AI to circumvent the traditional detection mechanisms employed by cybersecurity defenders. For example, generative models are used to obscure recognizable patterns in malicious code and communications, frustrating automated and heuristic-based defenses. Additionally, AI-driven social media bots have enabled certain cybercrime actors to execute coordinated harassment campaigns, particularly ones targeting women. These bot networks operate at scale, facilitating fraudulent schemes and monetizing harassment with alarming efficiency.
Interestingly, the use of AI is not democratizing cybercrime in the manner some experts feared. While tools such as AI coding assistants are indeed employed, they primarily benefit actors who already possess advanced skills. The deployment of these tools requires significant knowledge, and novice criminals often remain unable to harness AI’s full potential. This suggests that AI neither dramatically lowers the technical barriers to cybercrime nor rapidly expands the pool of capable criminals; instead, it augments the capabilities of established practitioners.
The researchers identified emerging use cases of AI in automating complex cybercriminal tasks, especially in areas such as social engineering and bot farming. Automation frameworks enhanced with AI facilitate persistent phishing attacks, adaptive scam dialogues, and management of large-scale botnets. Nonetheless, these innovations represent evolutionary improvements built on existing, industrialized criminal infrastructures, rather than revolutionary leaps that disrupt the status quo.
One pivotal aspect addressed in the study concerns the role of guardrails integrated into major AI chatbot platforms. These safeguards, designed to restrict harmful outputs, appear to be effective in limiting direct cybercriminal misuse. However, the researchers observed early signs that underground communities are attempting to circumvent the restrictions, using prompt engineering and adversarial techniques to coax prohibited outputs from the chatbots. This cat-and-mouse dynamic between AI developers and malicious users highlights an ongoing frontier in AI security.
Beyond the internal dynamics of cybercrime adoption, the study reveals a broader sociotechnical context. Many cybercriminals expressed anxiety about AI’s disruptive impact on legitimate IT sector jobs, fearing displacement due to automation in mainstream software development. This apprehension, paradoxically, may incentivize a shift toward illicit activities, potentially swelling cybercrime ranks as AI reshapes labor markets.
While the immediate threats posed by AI-enhanced cybercriminal tools appear contained, the researchers sound a cautionary note regarding the proliferation of autonomous, agentic AI systems. These systems can make independent decisions and execute tasks without human oversight, a development that could escalate the cyber threat landscape if they are deployed insecurely. Similarly, vulnerabilities introduced by "vibecoded" software, that is, code generated or heavily assisted by AI in legitimate industries, could inadvertently create new attack vectors accessible even to low-skill actors.
The findings, published ahead of a presentation at the Workshop on the Economics of Information Security scheduled for June 2026 in Berkeley, USA, underscore a critical pivot in cybersecurity discourse. According to Dr. Ben Collier, a senior lecturer involved in the research, the principal danger lies not in cybercriminal adoption of AI but in the unintentional security risks emerging from widespread AI integration in industry and public domains. This realignment of threat perception calls for heightened vigilance in securing AI-driven systems before they can be weaponized effortlessly by opportunistic adversaries.
The study’s comprehensive approach—blending quantitative analysis of massive datasets with qualitative insights into underground forum communications—sets a new standard for understanding cybercrime ecosystems in the AI era. By dissecting the lived realities of these communities, the research offers policymakers, security professionals, and the public a grounded appraisal of AI’s dual-use nature. Far from being a simple harbinger of doom, AI’s role in cybercrime is characterized by incremental change, constrained adoption, and evolving challenges that demand sophisticated, anticipatory defense strategies.
In sum, this landmark study tempers unrestrained fears surrounding AI and cybercrime. It urges technology creators and adopters alike to focus on securing AI applications themselves, ensuring guardrails keep pace with advancing capabilities. As cybercriminals experiment tentatively with AI tools, the greater threat lies in how those same tools, poorly safeguarded, could empower even unskilled actors to launch devastating attacks, thereby shifting the cybersecurity landscape in unpredictable ways.
Subject of Research: Not applicable
Article Title: Stand-Alone Complex or Vibercrime? Exploring the adoption and innovation of GenAI tools, coding assistants, and agents within cybercrime ecosystems
News Publication Date: 31-Mar-2026
DOI: 10.48550/arXiv.2603.29545
Keywords: Cybersecurity, Cybercrime, Artificial Intelligence, Generative AI, Social Engineering, Botnets, AI Coding Assistants, Underground Forums, AI Security, Agentic AI, Automation, Chatbot Guardrails

