OpenAI is convinced that its technology can help solve one of tech’s hardest problems: content moderation at scale. GPT-4 could replace tens of thousands of human moderators while being nearly as accurate and more consistent, claims OpenAI. If that’s true, the most toxic and mentally taxing tasks in tech could be outsourced to machines.

In a blog post, OpenAI claims that it has already been using GPT-4 for developing and refining its own content policies, labeling content, and making decisions. “I want to see more people operating their trust and safety, and moderation [in] this way,” OpenAI head of safety systems Lilian Weng told Semafor. “This is a really good step forward in how we use AI to solve real world issues in a way that’s…”