India Tightens Social Media Rules, Mandates Three-Hour Takedown of Unlawful Content from February 20
New Delhi, February 11, 2026 — The Union government has significantly tightened India’s online content regulations, mandating that social media platforms remove or disable access to unlawful content within three hours of receiving a government or court order. The amended rules, notified on Tuesday, will come into effect from February 20, replacing the earlier 36-hour compliance window under the Information Technology Rules, 2021.
The move marks one of the most stringent regulatory shifts for digital intermediaries in India and is expected to have far-reaching implications for major platforms operating in the country, including those with hundreds of millions of users. It also places India among the world’s most assertive regulators of online speech, at a time when governments globally are demanding greater accountability from technology companies.
What the New Rules Require
Under the amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, intermediaries must now act on takedown orders within three hours, a sharp reduction from the 36-hour timeline provided under both the earlier rules and the draft amendments.
The notification was issued by the Ministry of Electronics and Information Technology under provisions of the Information Technology Act, 2000. In addition to faster takedowns, the amendments also shorten timelines for grievance redressal, requiring platforms to acknowledge and act on user complaints more quickly.
The government has not provided a detailed explanation for the reduced timeline. However, officials have previously argued that faster action is necessary to curb the rapid spread of harmful content online, particularly during sensitive periods such as elections or social unrest.
Compliance Concerns from Industry Experts
Legal and technology experts have raised concerns about the feasibility of the new timeline. Akash Karmakar, a technology law specialist, described the three-hour requirement as “practically impossible” in many real-world situations, noting that content moderation often requires legal review, contextual assessment, and internal escalation.
Industry representatives, speaking anonymously, said the rule had been introduced without extensive consultation and departed from international norms, which generally allow longer response times. They argue that such compressed timelines may lead platforms to err on the side of over-compliance, potentially affecting lawful speech.
Major platforms have so far offered limited public response. Some companies declined to comment, while others did not immediately respond to requests for clarification on how they plan to operationalise the new requirements.
Expanded Framework for AI-Generated and Synthetic Content
Alongside the stricter takedown deadlines, the amendments introduce a clearer regulatory framework for synthetically generated content, including deepfakes. The rules define such material as audio, visual, or audio-visual content that is artificially created or altered in a way that appears real and is likely to be perceived as authentic.
Importantly, the final notification narrows the scope compared to earlier drafts. Routine editing, accessibility tools, and educational material are excluded, provided they do not create false electronic records or misrepresent people or events.
The changes follow a series of incidents involving AI-generated videos and audio clips that falsely depicted individuals, including public figures. Authorities have cited rising cases of non-consensual deepfake pornography, impersonation scams, and misleading content during elections as key concerns driving the new framework.
Labelling and Technical Safeguards
For lawful synthetic content, platforms are now required to ensure it is clearly and prominently labelled. Such content must also carry permanent metadata or other provenance markers, including a unique identifier. Users will not be allowed to remove or suppress these labels.
Significant social media intermediaries must require users to declare whether their content is synthetically generated. Platforms are expected to verify these declarations using technical measures and ensure that confirmed synthetic content is labelled before publication.
The rules also mandate that intermediaries deploy safeguards to prevent the circulation of unlawful material, including child sexual abuse content, non-consensual intimate imagery, false documents, and content related to explosives or arms.
Due Diligence and Legal Liability
The amendments strengthen the concept of “due diligence” for intermediaries. If a platform is found to have knowingly permitted, promoted, or failed to act on prohibited content, it may be deemed non-compliant with the rules, exposing it to legal consequences.
Platforms must also periodically inform users about the consequences of violations, such as content removal, account suspension, and potential liability under Indian law. The notification references statutes including the Bharatiya Nyaya Sanhita, the Protection of Children from Sexual Offences Act, and election-related laws.
Notably, the amendments update legal references to align with India’s new criminal codes, replacing older references to the Indian Penal Code.
Balancing Regulation and Free Expression
India has issued thousands of content takedown orders in recent years, according to transparency disclosures by social media companies. The country’s large and diverse online population—estimated at over one billion internet users—makes content governance particularly complex.
Digital rights advocates have cautioned that tighter timelines and broad takedown powers could raise concerns about censorship and the chilling of legitimate expression. At the same time, governments worldwide are under pressure to respond more decisively to online harms, from misinformation to AI-driven abuse.
What Lies Ahead
With the new rules set to take effect later this month, social media platforms face a narrow window to update internal processes, legal review mechanisms, and technical systems. How effectively the three-hour mandate can be implemented—and how it will be enforced—remains to be seen.
As India continues to shape its digital regulatory framework, the coming months are likely to test the balance between rapid enforcement, technological feasibility, and the protection of free expression in one of the world’s largest online markets.
