Do you AMI?

Introduction

Unfortunately, episodes of harassment against women are increasingly common, and misogynistic comments can be found throughout social media, where misogynists hide behind the security of anonymity. It is therefore important to identify misogyny in social media. Recent investigations have studied how the misogyny phenomenon takes place, for example as unjustified slurring or as stereotyping of the role or body of a woman (e.g. the hashtag #getbacktokitchen), as described in the book by Poland [2]. A preliminary research work was conducted by Hewitt et al. [3] as a first attempt at manually classifying misogynous tweets, and a first attempt at automatically identifying misogynous content in social media was made by Anzovino et al. [1]. The same shared task has been organized on the occasion of IberEval-2018.

Task Description

The AMI shared task proposes the automatic identification of misogynous content in both English and Italian tweets. More specifically, it is a two-fold task, as follows.

Task A: Misogyny Identification

First, each tweet must be given a binary classification: misogynous or not misogynous.
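As an illustration only (not part of the task specification), Task A can be approached with even a trivial lexicon-based baseline before moving on to supervised models trained on the annotated corpus. The lexicon terms and tweets below are invented placeholders:

```python
# Illustrative Task A baseline: flag a tweet as misogynous if it contains
# any term from a (hypothetical, tiny) misogynistic-slur lexicon.
# Real participating systems would train a classifier on the labelled data.

SLUR_LEXICON = {"bitch", "whore", "hoe"}  # invented placeholder terms

def is_misogynous(tweet: str) -> bool:
    """Return True if any lexicon term appears as a token in the tweet."""
    tokens = {t.strip("#@.,!?").lower() for t in tweet.split()}
    return not SLUR_LEXICON.isdisjoint(tokens)

print(is_misogynous("Shut up you stupid bitch"))   # True
print(is_misogynous("Great talk on NLP today!"))   # False
```

Such a baseline ignores context entirely, which is exactly why the task is non-trivial: misogyny can be expressed without slurs, and slurs can appear in non-misogynous quotations or reports.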

Task B: Misogynistic Behaviour and Target Classification

Next, the misogynous tweets must be classified according to both the misogynistic behaviour and the target of the message.

On the one hand, the misogynistic behaviour classification is single-label: each tweet must be assigned exactly one of the following categories:

  • Stereotype & Objectification: a widely held but fixed and oversimplified image or idea of a woman; description of women’s physical appeal and/or comparisons to narrow standards.
  • Dominance: to assert the superiority of men over women to highlight gender inequality.
  • Derailing: to justify the abuse of women while rejecting male responsibility; an attempt to disrupt the conversation in order to redirect it towards something more comfortable for men.
  • Sexual Harassment & Threats of Violence: to describe actions as sexual advances, requests for sexual favors, harassment of a sexual nature; intent to physically assert power over women through threats of violence.
  • Discredit: slurring of women with no other larger intention.

On the other hand, the target classification is again binary:

  • Active (individual): the text includes offensive messages purposely sent to a specific target.
  • Passive (generic): it refers to messages posted to many potential receivers.
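The two-level Task B label scheme above can be written down as a small data structure. The class and field names here are our own, purely for illustration, and are not prescribed by the task:

```python
from dataclasses import dataclass
from enum import Enum

class Behaviour(Enum):
    # The five single-label misogynistic behaviour categories of Task B.
    STEREOTYPE_OBJECTIFICATION = "stereotype_objectification"
    DOMINANCE = "dominance"
    DERAILING = "derailing"
    SEXUAL_HARASSMENT_THREATS = "sexual_harassment_threats"
    DISCREDIT = "discredit"

class Target(Enum):
    # The binary target classification.
    ACTIVE = "active"    # individual: a specific addressee
    PASSIVE = "passive"  # generic: many potential receivers

@dataclass
class TaskBLabel:
    # Exactly one behaviour category and one target per misogynous tweet.
    behaviour: Behaviour
    target: Target

label = TaskBLabel(Behaviour.DISCREDIT, Target.ACTIVE)
print(label.behaviour.value, label.target.value)  # discredit active
```

Using an enum rather than free-form strings makes the single-label constraint explicit: a tweet carries one behaviour value and one target value, never a set of them.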

References

[1] Anzovino, M., Fersini, E., & Rosso, P. (2018). Automatic Identification and Classification of Misogynistic Language on Twitter. In Silberztein, M., Atigui, F., Kornyshova, E., Métais, E., & Meziane, F. (Eds.), Natural Language Processing and Information Systems. NLDB 2018. Lecture Notes in Computer Science, vol. 10859.

[2] Poland, B. (2016). Haters: Harassment, Abuse, and Violence Online. University of Nebraska Press.

[3] Hewitt, S., Tiropanis, T., & Bokhove, C. (2016). The problem of identifying misogynist language on Twitter (and other online social spaces). In Proceedings of the 8th ACM Conference on Web Science, pp. 333-335.