In a significant policy shift, India has introduced new regulations requiring social media companies to remove unlawful material within three hours of notification, a drastic reduction from the previous 36-hour deadline.

The new rules, effective from February 20, will impact major platforms including Meta, YouTube, and X, and extend to AI-generated content as well.

The Indian government has yet to explain the rationale behind the shortened takedown timeframe; however, critics fear it may signal a tightening grip on online discourse and lead to increased censorship in the country, which boasts over a billion internet users.

In recent years, authorities have utilized existing Information Technology regulations to demand the removal of content deemed illegal, citing national security and public order as justifications. Reports indicate that over 28,000 URLs were blocked in 2024 following government mandates.

Experts have raised concerns about the practicality of enforcing the new rules. Digital rights advocates and technology analysts argue that the expedited timeline could compel platforms to rely heavily on automated systems, diminishing human oversight and fostering indiscriminate censorship.

Additionally, the new regulations will require platforms hosting AI-generated content to label such material clearly, with provisions for employing automated tools to detect and block illegal AI content.

As the amendments take effect, the Internet Freedom Foundation has described the new rules as potentially the most extreme takedown regime in any democracy, emphasizing that compliance will be possible only with heavy automation and minimal human review. Analysts suggest that unless safeguards are prioritized, rapid removal protocols could lead to legitimate content being taken down in error.

The BBC has reached out to the Indian government for further clarification regarding these new rules and their implications.