Should AI be used in psychological research?

July 16, 2024
in Social Science

Mohammad Atari and colleagues explore the promise and peril of using large language models (LLMs) in psychological research, beginning by urging researchers to ask themselves not only how they should use LLMs, but whether and why they should use them at all. The authors caution against using LLMs as a replacement for human participants, noting that LLMs cannot capture the substantial cross-cultural variation in cognition and moral judgment known to exist. Most LLMs have been trained on data drawn primarily from WEIRD (Western, Educated, Industrialized, Rich, Democratic) sources, disproportionately in English.

Additionally, although LLMs can produce a variety of responses to the same question, beneath this apparent variance is an algorithm that produces the most statistically likely response most often, and less likely responses at proportionately lower frequencies. Essentially, an LLM simulates a single “participant” rather than a group, a point the authors underline by showing a marked lack of variance when they administered a broad range of self-report measures to LLMs.

The authors also warn that LLMs are no panacea for text analysis, especially where researchers are interested in implicit, emotional, moral, or context-dependent text. The “black-box” nature of LLMs makes them unsuited to many research contexts and makes reproducing results impossible as the models are updated and change. Finally, LLMs do not outperform older tools, such as small, fine-tuned language models, on many tasks. The authors conclude that while LLMs can be useful in certain contexts, the hurried and unjustified application of LLMs to every possible task could put psychological research at risk at a time when the reproducibility crisis calls for careful attention to the rigor and quality of research output.
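The variance point can be illustrated with a toy simulation. This is a minimal sketch, not taken from the paper: the response probabilities, scale, and sample sizes below are invented purely for illustration. Repeatedly sampling from one fixed response distribution, which is roughly what querying a single LLM many times amounts to, produces far less spread than sampling from a population whose members each have their own tendencies:

```python
import numpy as np

rng = np.random.default_rng(0)
likert = np.arange(1, 6)  # hypothetical 1-5 self-report scale

# "LLM" condition: every response is drawn from the same fixed
# distribution, peaked at the single most likely answer.
llm_probs = np.array([0.01, 0.03, 0.06, 0.75, 0.15])
llm_responses = rng.choice(likert, size=1000, p=llm_probs)

# "Human sample" condition: each simulated participant has their own
# mean, so between-person variation adds to the overall spread.
person_means = rng.normal(loc=3.5, scale=1.0, size=1000)
human_responses = np.clip(
    np.round(person_means + rng.normal(0.0, 0.5, size=1000)), 1, 5
)

print("LLM-like variance:  ", llm_responses.var())
print("Human-like variance:", human_responses.var())
```

Under these invented numbers, the single-distribution condition shows markedly lower variance than the heterogeneous-population condition, mirroring the lack of variance the authors report when administering self-report measures to LLMs.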




Journal: PNAS Nexus

Article Title: Perils and opportunities in using large language models in psychological research

Article Publication Date: 16-Jul-2024



© 2025 Scienmag - Science Magazine
