About this Event
This Behavioural Research Lab seminar will be hosted by Dr Dermot Lynott, Associate Professor in Psychology at Maynooth University. An online option is also available; if you would like to join online, please contact [email protected].
Prejudicial attitudes, such as those relating to age, race, or gender, exert a powerful influence on individuals and are pervasive throughout society. Recent research suggests that the statistical patterns of how words are used in language may capture such biases, with language models providing approximations of people’s linguistic experience. However, many questions about the links between language models and people’s biased attitudes remain unanswered. In the current study we focus on gender–career bias (where men are routinely favoured over women in the workplace) to examine the extent to which language models can be used to model behavioural responses in the Gender–Career Implicit Association Test (IAT). We provide a systematic evaluation of a range of language models, including n-gram, count vector, predict, and Large Language Models (LLMs), to determine how well they capture people’s behaviour in the IAT, examining data from over 800,000 participants tested against over 600 language model variants. While we find that LLMs perform well in modelling IAT responses, they are not significantly better than simpler count vector and predict models; indeed, Bayesian estimates show these simpler models providing better fits to the behavioural data. Our findings suggest that societal biases may be encoded in language, but that resource-greedy large language models are not necessary for their detection.
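For readers curious about how distributional models can surface such associations at all, the sketch below illustrates a WEAT-style (Word Embedding Association Test) score, which measures whether career-related words sit closer to male terms than to female terms in a word-vector space. The word lists, the toy random vectors, and the scoring functions are illustrative assumptions for this sketch only, not the method used in the study being presented.

```python
# Minimal WEAT-style sketch: how strongly career vs. family terms associate
# with male vs. female terms in a word-vector space. The word lists and the
# `embeddings` dictionary are illustrative placeholders.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a, attr_b, embeddings):
    # Mean similarity to attribute set A minus mean similarity to set B.
    sim_a = np.mean([cosine(embeddings[word], embeddings[a]) for a in attr_a])
    sim_b = np.mean([cosine(embeddings[word], embeddings[b]) for b in attr_b])
    return sim_a - sim_b

def weat_effect(targets_x, targets_y, attr_a, attr_b, embeddings):
    # Positive values: target set X (e.g. male terms) leans toward attribute
    # set A (e.g. career words) more than target set Y (female terms) does,
    # loosely analogous in spirit to an IAT D-score.
    x = [association(w, attr_a, attr_b, embeddings) for w in targets_x]
    y = [association(w, attr_a, attr_b, embeddings) for w in targets_y]
    pooled_sd = np.std(x + y, ddof=1)
    return (np.mean(x) - np.mean(y)) / pooled_sd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["man", "he", "woman", "she", "career", "office", "family", "home"]
    embeddings = {w: rng.normal(size=50) for w in vocab}  # stand-in vectors
    score = weat_effect(["man", "he"], ["woman", "she"],
                        ["career", "office"], ["family", "home"], embeddings)
    print(f"gender-career effect size (toy vectors): {score:.3f}")
```

With real count vector or predict embeddings in place of the random stand-ins, a positive effect size would indicate the gender–career association the IAT is designed to detect.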
Event Venue
Penthouse, Alliance Manchester Business School, Booth Street West, Manchester, United Kingdom
Free to attend.