Large language models (LMs) can complete abstract reasoning tasks, but they are susceptible to many of the same kinds of mistakes that humans make. Andrew Lampinen, Ishita Dasgupta, and colleagues tested state-of-the-art LMs and humans on three kinds of reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task. The authors found that the LMs are prone to content effects similar to those seen in humans. Both humans and LMs are more likely to mistakenly label an invalid argument as valid when its semantic content is sensible and believable. LMs also perform about as poorly as humans on the Wason selection task, in which the participant is shown four cards with letters or numbers on them (e.g., ‘D’, ‘F’, ‘3’, and ‘7’) and asked which cards they would need to flip over to check a rule such as “if a card has a ‘D’ on one side, then it has a ‘3’ on the other side.” Humans often opt to flip over cards that offer no information about whether the rule holds but that would test its converse. In this example, humans tend to choose the card labeled ‘3’, even though the rule does not imply that a card with ‘3’ has a ‘D’ on the reverse, so flipping it cannot falsify the rule; the logically informative cards are ‘D’ and ‘7’. LMs make this and other errors, and they show an overall error rate similar to that of humans. Human and LM performance on the Wason selection task improves if the rule about arbitrary letters and numbers is replaced with a socially relevant one, such as a rule relating people’s ages to whether they are drinking alcohol or soda. According to the authors, LMs trained on human data seem to exhibit some human foibles in reasoning and, like humans, may require formal training to improve their logical reasoning performance.
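The card logic behind the Wason task can be made concrete with a short sketch. The snippet below is illustrative only, not the authors' experimental setup; the card faces and rule are taken from the example above. It enumerates which cards could falsify the rule "if 'D' on one side, then '3' on the other": only 'D' and '7' are informative, while the commonly chosen '3' card is not.

```python
# Illustrative sketch of the Wason selection task logic (not the authors' code).
# Rule under test: "if a card has 'D' on one side, then it has '3' on the other."
# A card is worth flipping only if its hidden side could reveal a violation.

visible_faces = ["D", "F", "3", "7"]

def must_flip(face: str, antecedent: str = "D", consequent: str = "3") -> bool:
    """Return True if flipping this card could falsify 'if antecedent then consequent'."""
    if face == antecedent:
        # The hidden side might not show the consequent -> possible violation.
        return True
    if face.isdigit() and face != consequent:
        # A number other than the consequent: the hidden side might show the
        # antecedent, which would violate the rule (the contrapositive case).
        return True
    # Other letters, and the consequent itself, can never reveal a violation;
    # flipping '3' only tests the converse, which the rule does not assert.
    return False

print([f for f in visible_faces if must_flip(f)])  # -> ['D', '7']
```

Running the sketch prints `['D', '7']`, matching the normatively correct answer that both humans and LMs frequently miss when the rule is stated in arbitrary terms.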
Journal: PNAS Nexus
Article Title: Language models, like humans, show content effects on reasoning tasks
Article Publication Date: 16-Jul-2024
COI Statement: All authors are employed by Google DeepMind; J.L.M. is affiliated part-time.