As concerns grow over children’s use of social media and the risks accompanying it — exposure to harmful content, online grooming, cybercrime — India is finding itself relying on a patchwork of laws, regulatory frameworks and platform-led interventions to safeguard them.
But experts say enforcement gaps, technological loopholes and the ease with which children can misrepresent their age continue to pose challenges to the effectiveness of these safeguards.
The Union government, meanwhile, is learnt to be considering a graded approach to regulate children’s access to these platforms.
For now, India’s response spans multiple layers — legal safeguards such as the Digital Personal Data Protection Act, 2023, which mandates parental consent for processing children’s data, criminal provisions under laws such as the Information Technology Act and the Protection of Children from Sexual Offences (POCSO) Act, as well as platform-level measures like age-gating, parental controls and child-focused content ecosystems.
The risks for children online
Children who spend increasing hours online risk exposure to harmful content that can affect their mental health, leading to anxiety and alienation. Online grooming is also a real risk.
Data from the National Crime Records Bureau (NCRB) shows cybercrime against children spiked by 32% between 2021 and 2022, even as internet use among young users continues to expand.
According to a report published last year by NITI Aayog, children aged up to five spent an average of 1.5 hours online in 2023, accessing educational videos and games. Those aged six to 10 spent 2.5 hours online on services such as social media, gaming and videos. While 11-15 year olds spent four hours a day online, those aged 16-18 spent as much as six hours daily on social media, online forums and shopping.
India’s regulatory framework for children on the internet
India has developed a framework of regulatory measures, self-regulatory codes, and educational initiatives, though critics argue that enforcement can be lax.
Under the Digital Personal Data Protection Act, 2023, companies that collect the data of children – users under the age of 18 – must obtain consent from a parent or guardian. They also cannot track or monitor a child's behaviour, or serve targeted ads to children. But it is widely believed that children can get around this by simply misrepresenting their age.
According to a November 2025 report by the think tank Indian Governance and Policy Project, the Information Technology Act, 2000, criminalises the creation of child sexual abuse material; the POCSO Act, 2012, defines and penalises online sexual exploitation and grooming; the Bharatiya Nyaya Sanhita, 2023, extends liability to digital and online offences against children, including trafficking and harassment; and the Juvenile Justice (Care and Protection of Children) Act, 2015, addresses the online facilitation of child exploitation.
However, the report noted that “persistent weaknesses in digital forensic capacity, law-enforcement training, and the uneven functioning of Special POCSO Courts continue to limit the effective investigation and prosecution of offences”.
Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, platforms such as Netflix, Disney+ Hotstar and Apple TV must classify the content they host into five age-based categories – U (Universal), U/A 7+, U/A 13+, U/A 16+, and A (Adult). These platforms are required to implement parental locks for content classified as U/A 13+ or higher, and reliable age verification mechanisms for content classified as "A".
The Ministry of Education (MoE) introduced the PRAGYATA Guidelines on July 14, 2020, which aim to ensure the safety and academic welfare of students by recommending age-appropriate screen time limits.
Some social media platforms already have age-gating
In India, 13 is the minimum age to create and manage any Google account. Parents can, however, create an account for a child under 13 with 'Family Link' – a tool for setting up parental controls on Google services such as Chrome, Play, YouTube, and Search. Family Link also allows parents to block inappropriate sites, require approval for new apps, and manage permissions. At age 13, users receive an email to "update" their account, allowing them to manage it themselves, though parents are still notified if supervision stops.
Meta-owned Instagram, another social media app popular among teens and young users, has a feature called 'Teen Accounts'. It automatically places all teenagers' accounts under added protections, and users under 16 need a parent's permission to make any content settings less strict.
Tech companies also have children-only platforms. YouTube, for instance, has a separate kids platform that allows children to stream content in a "contained environment". The service also allows parents to select their child's age and tailor the content visible to them. Instagram was also developing a kids-only app, but Meta paused its development in 2021.
It is worth noting, though, that none of these measures is foolproof. Last year, a study led by a former senior Meta engineer who testified against the company before the US Congress found that nearly two-thirds (64%) of the new safety tools on Instagram advertised as protecting children were ineffective. Meta, though, said the report "misrepresented" its efforts.