AI Product, ActiveFence | Co-founder, Rewire | Visiting Researcher, University of Oxford & Alan Turing Institute
Title: Now we have ChatGPT, do we really need to build AI to moderate content?!
Abstract: Foundation models demonstrate incredible Natural Language Understanding and are increasingly powerful and flexible. From OpenAI's GPT-4 to open-source models like Falcon, Llama and Vicuna, these models can deliver state-of-the-art results even when used straight out of the box or with very limited training (i.e. zero- and few-shot learning, and parameter-efficient finetuning). Many have questioned whether models trained solely to automatically detect, rate and action unsafe content are still needed. I argue that whilst the latest crop of foundation models has made it far easier to deploy high-quality AI, they have not changed the need for proper evaluation and oversight, and they still require careful training. Their main benefit lies in tackling other long-standing problems in AI content moderation, such as scalably handling new threats, different languages, and niche types of abuse.
Bio: Dr. Bertie Vidgen works in AI product at ActiveFence, the market-leading provider of Trust and Safety solutions. He is also a visiting researcher at the Alan Turing Institute and the University of Oxford. Previously, he co-founded a startup (Rewire), which was acquired, and advised the UK Parliament on online safety.