

ARTICLE: Annotator Reliability Through In-Context Learning

Using LLMs to identify high-quality human annotators by checking whether their labels are consistent with AI predictions, helping build better training data while preserving diverse …

Sujan Dutta

Rater Cohesion and Quality from a Vicarious Perspective

Asking people to predict how others with different political views would label content reveals hidden biases and improves data quality for content moderation AI.

Deepak Pandita

Vicarious Offense and Noise Audit of Offensive Speech Classifiers: Unifying Human and Machine Disagreement on What is Offensive

We ran a massive experiment: 9 different AI content moderation systems analyzed 92 million YouTube comments about US politics. The results were shocking—different AI systems …

Tharindu Cyril Weerasooriya
