Title: Unhumanizing Models: Why We Need to Change How We Think About AI
Abstract: AI models are seemingly everywhere these days, feeling more human than ever before. However, we should be careful not to humanize these models, as doing so credits them with powers they do not possess and obscures the flaws they do have. In this talk, I will look at examples where models seem to know, understand, feel, create, judge, or move like humans, and explain why it is still wrong to anthropomorphize them. At the same time, we will have to find a way to inform AI about what it means to be human if we want to prevent harm and improve its capabilities. For that, we must look across disciplinary boundaries and build a (social) theory of AI.
Bio: I am an associate professor working on natural language processing and computational social science. Previously, I was a faculty member and postdoc in Copenhagen, earned a PhD from USC, and completed a master's in sociolinguistics in Germany. I am also the scientific director of BIDSA's Data and Marketing Insights (DMI) research unit and head of the MilaNLP lab. I have organized a conference (EMNLP 2017) and various workshops (on abusive language, ethics in NLP, and computational social science). Outside of work, I enjoy cooking, leather-crafting, and picking up heavy objects to put them back down.