Georgetown’s Center for Security and Emerging Technology (CSET)
Title: Thinking through Disinformation and Other Malicious Uses of Language Models
Abstract: Recent capability improvements and the widespread diffusion of generative AI systems have increased the risk of malicious use of language models, including for disinformation campaigns. In this talk, we will provide an overview of why language models could be useful for influence operations—building on a workshop report and original survey experiments. We will also use disinformation risks as a case study to consider broader challenges related to malicious use. These include challenges in forecasting or weighing unrealized harms, limitations of possible mitigation strategies, and trade-offs among different release options for AI systems.
Bio: Josh A. Goldstein is a Research Fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Prior to joining CSET, he was a pre- and postdoctoral fellow at the Stanford Internet Observatory. His research has included investigating covert influence operations on social media platforms, studying the effects of foreign interference on democratic societies, and exploring how emerging technologies will impact the future of propaganda campaigns. He holds an MPhil and DPhil in International Relations from the University of Oxford, where he studied as a Clarendon Scholar, and an A.B. in Government from Harvard College.