
ANALYSIS OF USER BEHAVIOUR AND AWARENESS OF META’S CONTENT MODERATION IN INDIA


At Centre for Advanced Studies in Cyber Law and Artificial Intelligence (CASCA), we recently carried out a survey called “Understanding User Awareness and Experience of Content Moderation on Meta Platforms in India.”

The idea was simple: to understand how people in India actually use Meta’s content moderation tools, like reporting options on Facebook and Instagram, and what they know about the Meta Oversight Board. We wanted to see where things work, where they don’t, and what stops users from trusting or accessing these systems.

The numbers tell a powerful story. While 93% of respondents knew about Meta’s reporting function, only 73.6% had ever used it. Even more striking, over 91% said they regularly encountered harmful content, yet just 34.8% felt their reports led to meaningful action. Trust gaps run deep: 64.2% doubted that Meta would act on their reports, and more than half said they never received updates on the status of their complaints.

When it came to the Meta Oversight Board, often described as the “Supreme Court” of Meta’s content moderation, the disconnect was even sharper. Over 54% of respondents had never heard of it, and only 15% knew they could appeal Meta’s decisions all the way to the Board.

Thanks to everyone who took the time to respond, we were able to identify real challenges that users face. We are also grateful to the Internet Freedom Foundation and Columbia Global Freedom of Expression for helping us reach a wider audience.

We are now sharing the findings of this survey in the hope that they spark more conversation and action toward making content moderation fairer, more transparent, and easier for everyone to use.

AUTHORED BY:

Tanmay Durani

Amishi Jain

R. Dayasakthi

Sanskriti Koirala

Uday Gupta
