Can LLMs fix the flaws in user reporting?
Large Language Models are being tested on everything from transparency reporting to content review. But could they help modernise one of the oldest T&S processes: how users report harm and appeal moderation decisions?