A team from QCRI joined an international group of researchers to address the automatic identification of harmful content online. While many studies have examined this important problem, this work is a comprehensive survey focusing specifically on harmful memes. Below is the abstract of the paper:
“The automatic identification of harmful content online is of major concern for social media platforms, policymakers, and society. Researchers have studied textual, visual, and audio content, but typically in isolation. Yet, harmful content often combines multiple modalities, as in the case of memes. With this in mind, here we offer a comprehensive survey with a focus on harmful memes. Based on a systematic analysis of recent literature, we first propose a new typology of harmful memes, and then we highlight and summarize the relevant state of the art. One interesting finding is that many types of harmful memes are not really studied, e.g., memes featuring self-harm and extremism, partly due to the lack of suitable datasets. We further find that existing datasets mostly capture multiclass scenarios, which are not inclusive of the affective spectrum that memes can represent. Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual, blending different cultures. We conclude by highlighting several challenges related to multimodal semiotics, technological constraints, and non-trivial social engagement, and we present several open-ended aspects such as delineating online harm and empirically examining related frameworks and assistive interventions, which we believe will motivate and drive future research.”
To read the full paper, please visit: https://www.ijcai.org/proceedings/2022/0781.pdf