Australian Government Votes to Ban Under 16s from Social Media Apps


Despite conflicting evidence about the viability and value of the plan, the Australian Government has now passed a new law that will force all social media platforms to ban users under the age of 16.

The controversial bill was passed late last night, on the final full sitting day of parliament for the year. The government was keen to get the bill through before the end-of-year break, and ahead of a national election that’s expected to be called early in the new year.

The agreed amendments to the Online Safety Act will mean that:

  • Social media platforms will be restricted to users over the age of 16
  • Messaging apps, online games, and “services with the primary purpose of supporting the health and education of end-users” will be exempt from the new restrictions (as will YouTube)
  • Social media platforms will need to prove that they’ve taken “reasonable steps” to keep users under 16 off their platforms
  • Platforms will not be allowed to require users to provide government-issued ID to prove their age
  • Penalties for breaches can reach a maximum of AU$49.5 million (US$32.2 million) for major platforms
  • Parents or young people who breach the laws will not face penalties

The new laws will come into effect in 12 months’ time, giving the platforms time to enact new measures that align with the updated regulations.

The Australian Government has touted this as a “world-leading” policy approach designed to protect younger, vulnerable users from unsafe exposure online.

But many experts, including some who have worked with the government in the past, have questioned the value of the change, and whether the impacts of kicking youngsters off social media could actually be worse than enabling them to use social platforms to communicate.

Earlier in the week, a group of 140 child safety experts published an open letter, which urged the government to re-think its approach.


As per the letter:

The online world is a place where children and young people access information, build social and technical skills, connect with family and friends, learn about the world around them and relax and play. These opportunities are important for children, advancing children’s rights and strengthening development and the transition to adulthood.

Other experts have warned that banning mainstream social media apps could push kids toward alternative platforms, which could increase, rather than reduce, their exposure to risk.

Exactly which platforms will be covered by the bill is unclear at this stage, because the amended bill doesn’t specify them as such. Aside from the government noting that messaging apps and gaming platforms won’t be part of the legislation, and verbally confirming that YouTube will be exempt, the actual bill states that all platforms where the “sole purpose, or a significant purpose” is to enable “online social interaction” between people will be covered by the new rules.

That could cover a lot of apps, though many could also argue against their inclusion. Snapchat, in fact, did try to argue that it’s a messaging app, and therefore should not be included, but the government has said that it will be one of the providers that’ll need to update its approach.

The vague wording also means that alternative apps are likely to rise to fill any gaps created by the shift. At the same time, enabling kids to continue using WhatsApp and Messenger means they’ll keep using services that are arguably just as risky, under the parameters of the amendment, as those covered.

To be clear, all the major social apps already have age limits in place, with most setting a minimum age of 13.

So we’re talking about a shift of three years in the minimum age, which, in reality, probably isn’t going to have that big an impact on overall usage for most platforms (Snapchat excepted).


The real challenge, as many experts have also noted, is that despite the current age limits, there are no truly effective means of age assurance, nor methods to verify parental consent.

Back in 2020, for example, The New York Times reported that a third of TikTok’s then 49 million U.S. users were under the age of 14, based on TikTok’s own reporting. And while the minimum age for a TikTok account is 13, the belief was that many of these users were below that limit, and that TikTok had no way to detect or verify them.

More than 16 million youngsters under 14 is a lot of potentially fake accounts presenting themselves as being within the age requirements. And while TikTok has improved its detection systems since then, as have all platforms, with new measures that utilize AI and engagement tracking, among other processes, to weed out these violators, the fact is that if 16-year-olds can legally use social apps, younger teens are also going to find a way.

Indeed, among the teenagers I’ve spoken to throughout the week (I live in Australia and have two teenage kids), none are concerned about these new restrictions, with most stating simply: “How will they know?”

Most of these kids have also been accessing social apps for years already, whether their parents allow them to or not, so they’re familiar with the many ways of subverting age checks. As such, most seem confident that any change won’t impact them.

And based on the government’s vague descriptions and outlines, they’re probably right.

The real test will come down to what’s considered “reasonable steps” to keep youngsters out of social apps. Are the platforms’ current approaches considered “reasonable” in this context? If so, then I doubt this change will have much impact. Is the government going to impose more stringent processes for age verification? It’s already conceded that it can’t ask for ID documents, so there’s not much more that it can push for, and despite talk of alternative age verification measures as part of this process, there’s been no sign of what those might be as yet.


So overall, it’s hard to see how the government is going to implement significant, systematic improvements, while the variable nature of detection at each app will also make the rules difficult to enforce legally, unless the government can impose its own systems for detection.

Meta’s methods for age detection, for example, are much more advanced than X’s. So should X be held to the same standards as Meta, if it doesn’t have the resources to meet those requirements?

I don’t see how the government will be able to prosecute that, unless it lowers the threshold of what qualifies as “reasonable steps” to ensure that the platforms with the worst detection measures are still able to meet the requirements.

As such, at this stage, I don’t see how this is going to be an effective approach, even if you concede that social media is bad for teens, and that they should be banned from social apps.

I don’t know if that’s true, and neither does the Australian Government. But with an election on the horizon, and the majority of Australians in support of more action on this front, it seems that the government believes this could be a vote winner.

That’s the only real benefit I can see to pushing this bill at this stage, with so many questionable elements still in play.


