
Can LLMs fix the flaws in user reporting?

Large Language Models are being tested for everything from transparency to content review. But could they help modernise one of the oldest T&S processes — how users report harm and appeal moderation decisions?
