But Who Protects the Moderators? The Case of Crowdsourced Image Moderation

Published 29 Apr 2018 in cs.HC (arXiv:1804.10999v4)

Abstract: Though detection systems have been developed to identify obscene content such as pornography and violence, artificial intelligence is simply not good enough to fully automate this task yet. Due to the need for manual verification, social media companies may hire internal reviewers, contract specialized workers from third parties, or outsource to online labor markets for the purpose of commercial content moderation. These content moderators are often fully exposed to extreme content and may suffer lasting psychological and emotional damage. In this work, we aim to alleviate this problem by investigating the following question: How can we reveal the minimum amount of information to a human reviewer such that an objectionable image can still be correctly identified? We design and conduct experiments in which blurred graphic and non-graphic images are filtered by human moderators on Amazon Mechanical Turk (AMT). We observe how obfuscation affects the moderation experience with respect to image classification accuracy, interface usability, and worker emotional well-being.
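The abstract does not specify implementation details, but the obfuscation technique it studies, blurring an image before a moderator sees it, can be sketched with Pillow. This is a minimal illustration only: the blur radius, file names, and function below are assumptions for demonstration, not parameters from the paper.

```python
# Minimal sketch of blur-based obfuscation for moderation review, using Pillow.
# The radius and file names are illustrative assumptions; the paper measures
# how obfuscation like this affects moderator accuracy, usability, and
# emotional well-being, not a specific parameter setting.
from PIL import Image, ImageFilter

def obfuscate(in_path: str, out_path: str, radius: float = 8.0) -> None:
    """Apply a Gaussian blur so a reviewer sees only coarse image structure."""
    img = Image.open(in_path)
    blurred = img.filter(ImageFilter.GaussianBlur(radius=radius))
    blurred.save(out_path)

if __name__ == "__main__":
    # A larger radius reveals less detail to the moderator, but may also
    # reduce classification accuracy -- the trade-off the experiments probe.
    obfuscate("flagged_image.jpg", "flagged_image_blurred.jpg", radius=8.0)
```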

Citations (22)
