Scienmag

Should AI be used in psychological research?

July 16, 2024
in Social Science

Mohammad Atari and colleagues explore the promise and peril of using large language models (LLMs) in psychological research, urging researchers to ask not only how they should use LLMs but whether and why. The authors caution against using LLMs as replacements for human participants: LLMs cannot capture the substantial cross-cultural variation in cognition and moral judgment known to exist, since most have been trained primarily on data from WEIRD (Western, Educated, Industrialized, Rich, Democratic) sources, disproportionately in English.

Moreover, although an LLM can produce a variety of responses to the same question, beneath this apparent variance lies an algorithm that returns the most statistically likely response most often and less likely responses at proportionately lower frequencies. In effect, an LLM simulates a single "participant" rather than a group, a point the authors underline by showing a marked lack of variance when administering a broad range of self-report measures to LLMs.

The authors also warn that LLMs are not a panacea for text analysis, especially where researchers are interested in implicit, emotional, moral, or context-dependent text. The "black-box" nature of LLMs makes them unsuited to many research contexts and makes results impossible to reproduce as the models are updated and change. Finally, LLMs do not outperform older tools, such as small, fine-tuned language models, on many tasks.

The authors conclude that while LLMs can be useful in certain contexts, their hurried and unjustified application to every possible task could put psychological research at risk at a time when the reproducibility crisis calls for careful attention to the rigor and quality of research output.
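The single-"participant" point can be illustrated with a toy simulation (a hedged sketch; the Likert weights and the population model below are invented for illustration and are not taken from the paper). Repeated queries to an LLM behave like repeated draws from one fixed output distribution, whereas a human sample mixes many distinct respondents, so between-person differences add to the spread:

```python
import random
import statistics

random.seed(42)

LIKERT = [1, 2, 3, 4, 5]

def sample(weights):
    """Draw one Likert-scale response from a fixed probability distribution."""
    return random.choices(LIKERT, weights=weights, k=1)[0]

# An LLM at a fixed temperature acts like ONE respondent: every query samples
# from the same output distribution, so the modal answer dominates.
llm_weights = [0.02, 0.06, 0.12, 0.60, 0.20]  # hypothetical, peaked at 4
llm_responses = [sample(llm_weights) for _ in range(1000)]

# A human sample mixes many respondents, each with their own typical answer,
# so between-person variation adds to the overall spread.
human_responses = []
for _ in range(1000):
    typical = random.choice(LIKERT)  # this person's characteristic leaning
    weights = [0.70 if x == typical else 0.075 for x in LIKERT]
    human_responses.append(sample(weights))

print("LLM variance:  ", round(statistics.variance(llm_responses), 2))
print("Human variance:", round(statistics.variance(human_responses), 2))
```

Under these assumed distributions, the "LLM" answers cluster tightly around the modal response while the simulated human sample spreads across the scale, mirroring the lack of variance the authors report for self-report measures.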

Journal: PNAS Nexus
Article Title: Perils and opportunities in using large language models in psychological research
Article Publication Date: 16-Jul-2024


© 2025 Scienmag - Science Magazine
