Should AI be used in psychological research?

July 16, 2024
in Social Science
Mohammad Atari and colleagues explore the promise and peril of using large language models (LLMs) in psychological research, beginning by urging researchers to ask themselves whether and why they should use LLMs, not just how. The authors caution against using LLMs as a replacement for human participants, noting that LLMs cannot capture the substantial cross-cultural variation in cognition and moral judgment known to exist: most LLMs have been trained on data drawn primarily from WEIRD (Western, Educated, Industrialized, Rich, Democratic) sources, disproportionately in English.

Additionally, although LLMs can produce a variety of responses to the same question, beneath this apparent variance is an algorithm that produces the most statistically likely response most often and less likely responses at proportionately lower frequencies. Essentially, an LLM simulates a single "participant" rather than a group, a point the authors underline by showing a marked lack of variance when administering a broad range of self-report measures to LLMs.

The authors also warn that LLMs are not a panacea for text analysis, especially where researchers are interested in implicit, emotional, moral, or context-dependent text. Moreover, the "black-box" nature of LLMs makes them unsuited to many research contexts and makes reproducing results impossible as the models are updated and change. Finally, LLMs do not outperform older tools, such as small fine-tuned language models, on many tasks.

The authors conclude that while LLMs can be useful in certain contexts, the hurried and unjustified application of LLMs to every possible task could put psychological research at risk at a time when the reproducibility crisis calls for careful attention to rigor and quality of research output.
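The variance point can be illustrated with a toy simulation (a sketch for intuition only, not the authors' method; all names and parameter values here are invented). A sample of human respondents is modeled with each person drawing their own underlying trait level, while repeated queries to an LLM are modeled as draws around a single fixed value, so only sampling noise remains:

```python
import random
import statistics

random.seed(42)

def sample_humans(n=500):
    """Each respondent has their own underlying trait level
    (between-person variance), plus measurement noise."""
    return [random.gauss(random.gauss(4.0, 1.0), 0.3) for _ in range(n)]

def sample_llm(n=500):
    """Repeated queries to one model: a single underlying 'trait',
    so only sampling noise remains."""
    return [random.gauss(4.0, 0.3) for _ in range(n)]

humans = sample_humans()
llm = sample_llm()

# Between-"respondent" spread is far larger for the simulated humans
# than for repeated queries of the single simulated model.
print(f"human SD: {statistics.stdev(humans):.2f}")
print(f"LLM SD:   {statistics.stdev(llm):.2f}")
```

Under these assumed parameters, the simulated human scores spread far more widely than the repeated model queries, mirroring the paper's observation that an LLM behaves like one participant sampled many times rather than a population.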

Journal: PNAS Nexus

Article Title: Perils and opportunities in using large language models in psychological research

Article Publication Date: 16-Jul-2024

© 2025 Scienmag - Science Magazine