
Can AI Influence You to Adopt Veganism—or Engage in Self-Harm?

October 1, 2025
in Medicine

Recent research from the University of British Columbia (UBC) reveals a striking truth about the emerging influence of large language models (LLMs): these sophisticated AI systems possess a persuasive power that not only rivals but surpasses that of humans. The study, led by Dr. Vered Shwartz, assistant professor of computer science at UBC, investigates how LLMs like GPT-4 can shape human decisions on lifestyle choices, ranging from diet to education. This groundbreaking finding sparks an urgent conversation about the ethical implications and the necessity for robust safeguards in the age of AI-driven communication.

Dr. Shwartz’s inquiry centered on the capacity of AI to persuade individuals to change course in their lives, whether by adopting veganism, purchasing an electric vehicle, or pursuing graduate education. Her team conducted an experimental study involving 33 participants who conversed with either a human persuader or the GPT-4 language model. Before the interaction, participants rated their willingness to embrace these lifestyle changes, and they were re-assessed afterward to gauge the effectiveness of the persuasion. Throughout these interactions, the AI was deliberately instructed to conceal its artificial identity so as to simulate genuine human communication.
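The pre/post design described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual analysis code: the function name and the sample ratings are invented, and the study's real measure may differ.

```python
# Illustrative pre/post persuasion measure: each participant rates their
# willingness before and after the conversation; the mean shift per
# condition (human vs. GPT-4 persuader) indicates persuasive effect.
def mean_shift(pre: list[float], post: list[float]) -> float:
    """Average change in willingness ratings after the conversation."""
    if len(pre) != len(post) or not pre:
        raise ValueError("pre and post must be non-empty and equal length")
    return sum(b - a for a, b in zip(pre, post)) / len(pre)


# Invented example ratings on a 1-7 willingness scale:
human_delta = mean_shift(pre=[3, 4, 2], post=[4, 4, 3])   # human persuader
ai_delta = mean_shift(pre=[3, 4, 2], post=[5, 5, 4])      # GPT-4 persuader
```

A larger mean shift in the AI condition, across topics, is the pattern the study reports.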

The results were unequivocal: LLMs demonstrated superior persuasion abilities across all tested topics, excelling especially at motivating participants to become vegan or attend graduate school. This outcome underscores the immense potential LLMs have not only for positive influence but also for manipulation. While human persuaders showed greater skill in actively asking questions and gathering personal information to tailor their responses, the AI compensated by delivering more voluminous and detailed arguments: GPT-4 consistently generated nearly four times as much text as the human persuaders, a volume that contributed significantly to its persuasive impact.

One critical factor driving the AI’s perceived authority is its linguistic sophistication. The AI’s rhetoric featured an elevated use of longer words (seven letters or more, such as “longevity” and “investment”), which may subconsciously lend the text greater credibility. Beyond vocabulary, the AI’s ability to provide tangible, specific support enhanced persuasiveness; for example, it recommended concrete vegan brands or named potential universities. This logistical assistance transforms abstract suggestions into actionable advice, embedding the AI’s influence more deeply.
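The long-word measure is simple enough to sketch. The function below is a minimal illustration of the seven-letters-or-more heuristic as reported; it is not the researchers' code, and their exact tokenization may differ.

```python
# Crude lexical-sophistication proxy: the fraction of words with seven or
# more letters, the threshold the study reportedly used for "long" words.
import re

def long_word_ratio(text: str, min_len: int = 7) -> float:
    """Fraction of alphabetic words with at least `min_len` letters."""
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    return sum(len(w) >= min_len for w in words) / len(words)
```

On this measure, the AI-generated arguments scored higher than the human-written ones.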

Moreover, conversational pleasantness emerges as a subtler, yet significant, contributor to AI persuasiveness. Participants reported feeling more agreeable during exchanges with GPT-4, in part due to the model’s frequent verbal affirmations and pleasantries, fostering a rapport that feels natural and engaging. This empathetic simulation creates a perception of understanding and support, further amplifying the AI’s capacity to sway opinions. Such nuances reinforce the concept that LLMs are evolving beyond mere linguistic generators into effective social agents.

The implications of these findings extend well beyond academic interest, pressing upon society the urgent need to address the ethical frameworks surrounding AI communication. Dr. Shwartz emphasizes the vital role of AI literacy education: as AI conversations increasingly mask themselves as human, the average user must be equipped to recognize and critically evaluate AI-generated content. The challenge intensifies as AI models approach indistinguishability, elevating risks of covert misinformation and manipulative campaigns embedded in ostensibly trustworthy formats.

Compounding this need for awareness is the inherent fallibility of current generative models. Despite their eloquence, these systems can hallucinate, producing inaccurate or entirely fictitious information confidently presented as fact. Instances such as erroneous AI-generated summaries atop search pages illustrate potential pitfalls for end-users lacking critical inquiry skills. Therefore, fostering skepticism and verification habits is crucial to mitigating the influence of misinformation, whether intentional or accidental.

The study also touches on mental health concerns linked to AI interactions. Adaptive safeguards, including automated detection and intervention mechanisms for harmful or suicidal text generated by or directed toward users, could serve as a frontline defense. These interventions might provide gentle warnings or direct users toward professional help, leveraging AI’s own analytical capabilities to counteract its potential for harm. This dual role as both influencer and protector outlines a complex ethical landscape in conversational AI deployment.

Despite the tangible benefits of generative AI, Dr. Shwartz cautions against hasty commercialization without comprehensive safety measures. She advocates for a thoughtful, multidisciplinary approach involving technologists, ethicists, and policymakers to establish effective guardrails and to explore alternative AI paradigms beyond large-scale generation. Such diversification can reduce systemic vulnerabilities and promote more robust, accountable AI ecosystems.

This research not only illuminates the impressive persuasiveness of AI but also serves as a clarion call for proactive governance. As AI systems continue to integrate into domains influencing human beliefs and decisions—including journalism, marketing, and education—the question is no longer whether AI should be employed, but how society can safeguard against its misuse. Balancing innovation with responsibility becomes a paramount task in navigating this new technological frontier.

In summary, large language models such as GPT-4 are not mere linguistic tools but potent persuaders with profound implications for human autonomy and societal trust. Their ability to combine extensive content generation, linguistic sophistication, concrete logistical support, and conversational empathy enables a level of influence that demands vigilant oversight. Understanding these dynamics is critical as we enter a new era where AI increasingly shapes public discourse and personal choices. The need for education, critical thinking, and ethical guardrails has never been more urgent.

Subject of Research: Persuasiveness of Large Language Models in Lifestyle Decision-Making
Article Title: [Not explicitly provided in the original content]
News Publication Date: [Not explicitly provided in the original content]
Web References:
– https://aclanthology.org/anthology-files/pdf/sicon/2025.sicon-1.pdf#page=50
– https://lostinautomatictranslation.com/
– DOI: 10.18653/v1/2025.sicon-1.4

References:
University of British Columbia research led by Dr. Vered Shwartz

Image Credits: Not provided

Keywords: Artificial intelligence, Generative AI, Computer science, Empathy, Psychological science, Communication skills

Tags: AI persuasion power, ethical implications of AI communication, GPT-4 and behavioral change, human-AI interaction studies, influence of language models, lifestyle changes and technology, persuasive technology in education, research on AI and decision-making, safeguards against AI manipulation, self-harm and AI influence, UBC AI research findings, veganism adoption through AI
© 2025 Scienmag - Science Magazine
