Should AI be used in psychological research?

July 16, 2024
in Social Science

Mohammad Atari and colleagues explore the promise and peril of using large language models (LLMs) in psychological research, beginning by urging researchers to ask themselves whether and why they should use LLMs, not just how they should use them. The authors caution against using LLMs as a replacement for human participants, noting that LLMs cannot capture the substantial cross-cultural variation in cognition and moral judgement known to exist: most LLMs have been trained on data drawn primarily from WEIRD (Western, Educated, Industrialized, Rich, Democratic) sources, disproportionately in English. Additionally, although LLMs can produce a variety of responses to the same question, beneath this apparent variance lies an algorithm that returns the most statistically likely response most often and less likely responses at proportionately lower frequencies. Essentially, an LLM simulates a single “participant” rather than a group, a point the authors underline by showing a marked lack of variance when a broad range of self-report measures is administered to LLMs. The authors also warn that LLMs are not a panacea for text analysis, especially where researchers are interested in implicit, emotional, moral, or context-dependent text. Furthermore, the “black-box” nature of LLMs makes them unsuited to many research contexts and makes results impossible to reproduce as the models are updated and change. Finally, on many tasks LLMs do not outperform older tools such as smaller, fine-tuned language models. The authors conclude that while LLMs can be useful in certain contexts, the hurried and unjustified application of LLMs to every possible task could put psychological research at risk at a time when the reproducibility crisis calls for careful attention to the rigor and quality of research output.
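The point about apparent response variety can be made concrete with a short simulation. The sketch below (in Python with NumPy; the logits, distributions, and sample sizes are hypothetical illustrations, not figures from the paper) treats an LLM as a single fixed output distribution over a 1-5 self-report scale and contrasts repeated sampling from it with a simulated human sample in which each respondent has their own response tendency.

```python
# Minimal sketch (hypothetical numbers, not from the study): repeated sampling from
# one fixed response distribution vs. a simulated human sample with individual differences.
import numpy as np

rng = np.random.default_rng(42)
scale = np.arange(1, 6)          # 1-5 Likert-style self-report scale
n = 1_000                        # number of simulated responses / respondents

# "LLM": one fixed output distribution (softmax over made-up logits).
logits = np.array([0.0, 1.0, 4.0, 1.5, 0.0])
p_llm = np.exp(logits) / np.exp(logits).sum()
llm_responses = rng.choice(scale, size=n, p=p_llm)   # the same distribution, resampled n times

# Human sample: each respondent has their own latent tendency plus response noise.
person_means = rng.normal(loc=3.0, scale=1.0, size=n)
human_responses = np.clip(np.rint(rng.normal(person_means, 0.7)), 1, 5)

print(f"LLM resampled {n}x  : mean={llm_responses.mean():.2f}  var={llm_responses.var():.2f}")
print(f"Human sample (n={n}): mean={human_responses.mean():.2f}  var={human_responses.var():.2f}")
```

Under these assumptions the resampled “LLM participant” shows markedly lower variance than the simulated human sample, and greedy (temperature-zero) decoding would make the contrast starker still, since every query would then return the single modal response.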

Journal: PNAS Nexus

Article Title: Perils and opportunities in using large language models in psychological research

Article Publication Date: 16-Jul-2024
