Welcome to part 1 of our deep dive into tech platform policy on countering terrorism and violent extremism. The format is slightly different this week: we first chat to Charlotte Willner, who gives us some context on the evolution of moderating terrorist content, before getting a tech platform perspective on how these challenges look today.
In part 1, Anne Craanen speaks to Charlotte Willner, Executive Director of the Trust & Safety Professional Association. Charlotte was one of Facebook’s early content moderators and went on to build and lead the company's first Safety Operations team. She gives unique insight into the early challenges Facebook encountered in developing policies to counter terrorist content, how these content moderation challenges have evolved since, and the inevitable dilemmas human moderators face in the field of Trust and Safety.
Come back for part 2, where we speak to Jessica Mason of Clubhouse and Josh Parecki of Zoom to get a tech platform perspective on what informs their counterterrorism policies, why each platform faces different challenges in implementing those policies, and what’s being done to help tech companies navigate these difficult Trust and Safety challenges.
To find out more about Tech Against Terrorism and our work, visit techagainstterrorism.org or follow us on Twitter @techvsterrorism, where you can find resources on this topic.
You can learn more about the Trust & Safety Professional Association here.