Australia’s internet regulator has accused the world’s biggest social platforms of failing to adequately implement the country’s ban on under-16s using their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting practices such as permitting banned users to make repeated attempts at age verification and insufficient safeguards to stop new account creation. In its first compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Compliance Failures Exposed in First Major Review
Australia’s eSafety Commissioner has documented a troubling pattern of non-compliance among the world’s largest social media platforms in her inaugural review since the ban took effect on 10 December. The report reveals that Meta, Snap, TikTok and YouTube have collectively failed to implement appropriate safeguards to prevent minors from using their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification processes, highlighting that some platforms have allowed children who originally declared themselves under 16 to later claim they were older, thereby undermining the law’s intent.
The findings mark a notable intensification in the regulatory response, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has stressed that simply showing some children still maintain accounts is insufficient; platforms must instead provide concrete evidence that they have put in place comprehensive systems and procedures designed to prevent under-16s from opening accounts in the first place. This shift reflects the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations. The review identified several recurring failures:
- Allowing previously banned users to re-verify their age and regain account access
- Permitting repeated attempts at the same age assurance method without consequence
- Insufficient mechanisms to prevent under-16s from creating new accounts
- Limited notification systems for parents and the general public
- Absence of transparent data about compliance actions and account removals
The Extent of the Issue
The substantial scale of social media usage amongst Australian young people underscores the compliance challenge confronting both the government and the platforms themselves. With numerous accounts already removed or restricted since the ban came into force, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to implementing age restrictions have proved considerably more complex than anticipated, with platforms struggling to differentiate authentic age confirmations from fraudulent ones. This complexity has left enforcement authorities grappling with the fundamental question of whether existing age verification systems are adequate to the task.
Beyond the technical obstacles lies a wider question about the willingness of platforms to prioritise compliance over user growth. Social media companies have consistently opposed stringent age verification measures, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be making sufficient effort to deploy the legally mandated infrastructure. The move to active enforcement represents a critical juncture: either platforms significantly enhance their compliance systems, or they risk substantial penalties that could reshape their business models in Australia and potentially influence compliance frameworks internationally.
What the Figures Indicate
In the first month after the ban’s implementation, Australian officials indicated that 4.7 million accounts had been restricted or removed. Whilst this number initially appeared to demonstrate enforcement effectiveness, later review reveals a more layered picture. The sheer volume of account removals suggests that many under-16s had managed to establish accounts in the first place, implying that preventative measures were inadequate. Furthermore, the data raises questions about whether suspended accounts represent genuine enforcement or merely users deleting their profiles of their own accord in light of the new restrictions.
The limited transparency concerning these figures has frustrated independent observers trying to determine the ban’s genuine effectiveness. Platforms have revealed little data about their implementation approaches, success rates, or the profile of suspended accounts. This lack of clarity makes it difficult for regulators and the wider public to judge whether the ban is functioning as designed or whether teenagers are simply finding alternative ways to access social media. The Commissioner’s insistence on thorough documentation of consistent enforcement practices reflects mounting dissatisfaction with platforms’ reluctance to provide full information.
Sector Reaction and Opposition
The major tech platforms have responded to the regulator’s enforcement action with a mixture of assurances of compliance and doubts about the ban’s practicality. Meta, which runs Facebook and Instagram, emphasised its commitment to complying with Australian law whilst contending that accurate age determination remains a major challenge across the industry. The company has called for a different approach, proposing that robust age verification systems and parental consent requirements implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects wider concerns across the industry that the existing regulatory regime places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, stating that it had locked 450,000 accounts following the ban’s implementation and that it continues to lock more daily. However, industry observers question whether such figures reflect genuine compliance or simply reactive account management. The core conflict remains unresolved between platforms’ business models, which historically relied on maximising user engagement and growth, and the statutory obligation to systematically exclude an entire age demographic. Companies have long resisted rigorous age verification methods, pointing to privacy issues and technical constraints, creating an impasse between regulators and platforms over who bears responsibility for implementation.
- Meta argues age verification should occur at app store level rather than on individual platforms
- Snap says it has locked 450,000 user accounts since the ban’s implementation in December
- Industry groups cite privacy issues and technical challenges as impediments to effective age verification
- Platforms insist they are making good-faith efforts whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Impact
As Australia’s under-16 social media ban moves into its enforcement phase, key questions persist about whether the law will accomplish its stated objectives or merely drive young users towards less regulated platforms. The regulator’s first compliance report reveals that despite months of implementation, substantial gaps remain: children continue to find ways to bypass age verification mechanisms, and platforms have struggled to prevent new underage accounts from being created. Critics argue that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon mainstream platforms or simply shift towards less regulated services, encrypted messaging apps, or VPNs that conceal their location.
The ban’s global implications add another layer of complexity to assessments of its effectiveness. Countries including the United Kingdom, Canada, and various European states are watching Australia’s initiative closely, exploring similar legislation for their own populations. If the ban proves ineffective at reducing children’s online activity or fails to protect them from dangerous online content, it could undermine the case for equivalent legislation elsewhere. Conversely, if implementation proves rigorous enough to genuinely restrict underage access, it may inspire other nations to pursue similar approaches. The outcome could shape international regulatory direction for years to come, ensuring Australia’s efforts are scrutinised far beyond its borders.
Who Benefits and Who Is Disadvantaged
Mental health advocates and organisations focused on child safety have endorsed the ban as an essential measure to counter algorithmic manipulation and exposure to harmful content. Parents and educators argue that removing young Australians from platforms designed to maximise engagement could reduce anxiety, improve sleep quality, and decrease exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also eliminates legitimate uses of social media for young people: maintaining friendships, accessing educational material, and participating in online communities built around shared interests. The regulatory approach assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s practical impact reaches beyond individual users to content creators, small businesses, and community organisations that depend on social media platforms. Young people who might have built creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing can no longer reach younger audiences. Community groups, charities, and educational organisations struggle to connect with young people through channels they previously used effectively. Meanwhile, the ban inadvertently advantages large technology companies with the resources to develop age verification infrastructure, possibly reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s impact extends well beyond the straightforward goal of child protection.
What Happens Next for Enforcement
Australia’s eSafety Commissioner has signalled a notable transition from hands-off observation to direct intervention, marking a pivotal moment in the enforcement of the youth access ban. The regulator will now gather evidence to establish whether companies have failed to take “reasonable steps” to prevent underage access, a requirement that goes beyond simply recording that minors continue using these services. This approach demands concrete proof that companies have introduced suitable mechanisms and procedures designed to exclude minors. The Commissioner’s office has indicated it will pursue investigations methodically, building cases that could lead to significant fines for non-compliance. This transition from observation to intervention reflects growing frustration with the companies’ current approach and signals that voluntary cooperation alone will not be enough.
The enforcement stage raises critical questions about the adequacy of penalties and the operational mechanisms for holding companies accountable. Australia’s legislation provides compliance mechanisms, but their success depends on the eSafety Commissioner’s willingness to pursue enforcement and the platforms’ capacity to adapt substantively. Overseas authorities, especially regulators in the United Kingdom and European Union, will closely track Australia’s regulatory approach and results. A robust enforcement effort could provide a blueprint for other countries evaluating similar bans, whilst failures might undermine the entire regulatory framework. The next phase will determine whether Australia’s novel statutory framework delivers real safeguards for teenagers or remains largely ceremonial in effect.
