The sixth edition of the CheckThat! Lab concluded successfully, registering participation from 127 teams.
Welcome to the ArAIEval shared task at WANLP 2023!
This shared task aims to further encourage work on computational propaganda and disinformation detection over Arabic content.
News Genre, Framing and Persuasion Techniques Detection using Multilingual Models
“News Genre, Framing, and Persuasion Techniques Detection using Multilingual Models” is scheduled to be presented at the 17th International Workshop on Semantic Evaluation (SemEval).
Team from QCRI at SemEval-2023: “Task 3: News Genre, Framing and Persuasion Techniques Detection using Multilingual Models”
Scientists from QCRI participated in Task 3, which focuses on addressing misinformation by identifying news genre, media framing, and persuasion techniques in news articles...
CheckThat! Lab Shared Task at CLEF 2023
The Tanbih team is co-organizing the CheckThat! Lab shared task at CLEF 2023 on the topics of Checkworthiness, Subjectivity, Political Bias, Factuality, and Authority of News...
Research Paper Accepted at EMNLP-2022
Scientists from QCRI published a paper titled “Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document” at EMNLP 2022.
QCRI Discusses the Findings of a Shared Task on Propaganda Detection in Arabic
The aim of the shared task is to build AI models that detect and identify propaganda techniques using Arabic tweets as a...
Research Paper by Tanbih’s team Accepted at COLING-2022
Tanbih’s team publishes research on fake news, propaganda, misinformation, and disinformation on online platforms.
CLEF 2022 CheckThat! Lab: Advancing the Detection of Misinformation and Disinformation in Social Media
The CLEF 2022 CheckThat! Lab is scheduled to take place from September 5-8, 2022, and will focus on advancing the detection of misinformation and...
Research Paper Accepted at IJCAI-2022 Titled “Detecting and Understanding Harmful Memes: A Survey”
A team from QCRI joined an international group of researchers in surveying the automatic identification of harmful content found online.
Research Paper Accepted at NAACL-2022 Titled “The Role of Context in Detecting Previously Fact-Checked Claims”
Tanbih’s team published a new paper studying the importance of modeling the context of the claims made in political debates.
QCRI Holds the Artificial Intelligence for Collective Intelligence (AI4CI) Workshop
The Social Computing Group at QCRI brought together leading organizations and researchers in the field of data science for the Artificial Intelligence for Collective...
Dr. Preslav Nakov Participates in Conversations Highlighting Media Literacy and Disinformation
Dr. Preslav Nakov from QCRI commented on media literacy in “The EU Meets The Balkans Forum”.
Talk About Tanbih at the 17th International Conference on Persuasive Technology
Dr. Preslav Nakov, a principal scientist at QCRI, participated in the 17th International Conference on Persuasive Technology.
“FANG: Leveraging Social Context for Fake News Detection Using Graph Representation”
We are pleased to announce the publication of a significant research paper titled “FANG: Leveraging Social Context for Fake News Detection Using Graph Representation”...