Australia on Track to Ban Social Media Access for Minors
Advocacy Groups Call for Government to Regulate Social Media Platforms

The Australian government is on track to introduce a bill in Parliament to ban youths under the age of 16 from accessing social media platforms. The government says the move will protect children from online harm, but experts warn that a blanket ban could quickly become counterproductive.
Prime Minister Anthony Albanese said Friday the new bill to mandate 16 as the minimum age to access social media will be introduced during the next Parliament session beginning Nov. 18. The bill, which has bipartisan support, follows extensive government consultation with parents, caregivers, child development experts, academia and state governments.
"Social media is doing social harm to our kids. I'm calling time on it," Albanese said. "I know one of the biggest issues worrying mums and dads is the impact social media is having on their children's well-being. This is a national challenge that requires national leadership. That's what our government is stepping up to deliver."
The prime minister stressed in an opinion piece that it is hard for young children to deal with nastiness on social media platforms, spot the fakes, or measure themselves against the unattainable standards of curated images.
Social media, he said, is "used as a weapon for bullies, a platform for peer pressure, a driver of anxiety, a vehicle for scammers and, worst of all, a tool for online predators. Parents feel they are working without a map. Parents are worried about where all this leads and they're looking to us to help."
Albanese's statement follows the South Australian government's proposed Children (Social Media Safety) Bill 2024, which would ban children under 14 from accessing social media platforms and require parental consent for those aged 14 and 15. The proposal relied on an in-depth legal examination by former High Court Chief Justice Robert French into the impact of social media use on children's mental health, well-being and development.
South Australian Premier Peter Malinauskas said the bill will impose an obligation on social media platforms to use available technology and processes to prevent access by children in the restricted age groups. "The system would be overseen by a regulator, which would be responsible for monitoring compliance and issuing a range of sanctions to social media platforms for breaches, including substantial financial penalties," he said.
"When we see products doing children harm, whether it be drugs, cigarettes or alcohol, governments have a role to play. The addictive nature of social media is no different," he added. "And my intent is clear, we are going to do something about it. Ultimately, we want to see a legal framework in place across the country."
eSafety Commissioner Julie Inman Grant, who will oversee social media platforms' compliance, welcomed the government's proposal to ban social media for under-16s but stressed the importance of balancing the protection of young people from online harms with support for the positive experiences they can have online.
Inman Grant said young people need support to build digital literacy and resilience and to prepare for the online environment they will inevitably inhabit. "Prevention and education have always been a foundation stone of eSafety's work and will remain so," she said, adding that social media platforms must be built with safety by design so that they are safe for children to use.
In her opening address at the Senate Estimates on Nov. 6, Inman Grant lauded Apple's initiative to introduce a feature that allows Australian children to easily report unwanted nude images directly to the company. "We believe regular reporting will keep the pressure on companies to make meaningful safety improvements - and we need to see a significant safety lift from these tech giants," she said.
According to Inman Grant, 84% of 8- to 12-year-olds in Australia use at least one online service. She said her office received 2,693 reports of serious online abuse directed at children in FY 2023, up 37% from the previous year. She said eSafety in July directed leading online platforms to draft enforceable codes that will protect children from various online harms, including exposure to inappropriate content. "If the codes don't meet appropriate community safeguards, I will consider moving to mandatory standards where I will set the rules for them," she said.
Banning Children May Not Help
The government says it communicated extensively with parent groups, caregivers, child development experts and academia before arriving at its decision, but many advocacy groups and academic institutions still believe the ban is practically unworkable and that the government should focus on alternative initiatives to protect children from online harm.
The Australian Child Rights Taskforce, made up of over a hundred organizations that advocate for the rights of Australian children, said a blanket ban is "too blunt an instrument to address risks effectively," given that children and young people go online to access information, build social and technical skills, learn about the world around them, and play and connect with family and friends.
Echoing the eSafety Commissioner's policy, the advocacy group says the government must instead regulate social media platforms and mandate minimum safety standards to make these platforms safer for children to use. "The work of keeping platforms responsible and building awareness of risk and responsibility amongst all users is a challenge but will provide longer term benefits," it said.
Sunita Bose, managing director of the Digital Industry Group Inc., called on the government to listen to mental health experts, marginalized groups and the eSafety youth council to come up with a solution that does not push children into unsafe, less visible parts of the internet.
A 2023 study by the University of Sydney is a case in point. Researchers found that the vast majority of children who use a wide array of social apps, messengers and online games are aware of online harms and take effective steps to protect themselves, such as avoiding scams and suspicious links, declining follow requests from strangers, disabling location services, blocking abusive users and deleting their profiles or apps.
"One of the most interesting things I discovered during the research is young people are incredibly capable when it comes to digital technologies and have developed a range of tactics that help them stay safe and navigate their digital lives," said Jonathon Hutchinson, chair of the university's Media and Communications discipline. "What young people need help with is managing their data and removing content quickly - that's where parents and carers can really help."
Accurate Age Verification Tech Doesn't Exist
Prime Minister Albanese said that once the bill is approved, the government, industry and the eSafety Commissioner will get at least 12 months to implement systems and processes. According to researchers at the University of Melbourne, the delay reflects the fact that existing AI-powered age verification technologies are inaccurate, typically estimating a person's age only to within about five years - meaning the technology could mistake a 13-year-old for an 18-year-old, or vice versa.
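To illustrate why a roughly five-year error margin matters for a 16-year cutoff, the toy simulation below estimates how often users of various ages would be misclassified. It is a minimal sketch, not a model of any real age-estimation product: the uniform error distribution and the helper functions are assumptions made purely for illustration, with only the five-year figure taken from the researchers' claim above.

```python
# Toy model: how a +/- 5-year error margin in AI age estimation can
# misclassify users around a 16-year minimum-age cutoff.
import random

CUTOFF = 16          # proposed minimum age for social media access
ERROR_MARGIN = 5.0   # approximate error range cited by the researchers

def estimated_age(true_age: float) -> float:
    """Simulate an estimate with error drawn uniformly within the margin (assumption)."""
    return true_age + random.uniform(-ERROR_MARGIN, ERROR_MARGIN)

def is_blocked(true_age: float) -> bool:
    """Return True if the estimator would block this user from signing up."""
    return estimated_age(true_age) < CUTOFF

def misclassification_rate(true_age: float, trials: int = 100_000) -> float:
    """Fraction of trials in which a user of this age is classified incorrectly."""
    should_block = true_age < CUTOFF
    wrong = sum(is_blocked(true_age) != should_block for _ in range(trials))
    return wrong / trials

if __name__ == "__main__":
    for age in (13, 15, 16, 18, 21):
        print(f"true age {age}: misclassified ~{misclassification_rate(age):.0%} of the time")
```

Under these simplified assumptions, users right at the 16-year boundary are misclassified roughly half the time, and even 13- and 18-year-olds are misjudged in a meaningful share of cases, which is the practical concern the researchers raise.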
The government launched an AU$6.5 million Age Assurance Technology Trial in May to test the maturity and effectiveness of age assurance technologies in determining whether a person is over 18 or between 13 and 16 years old. The tender document says the trial's outcome will inform the government's future decisions on implementing age assurance technologies to reduce children's exposure to age-inappropriate content and social media.
But the researchers remain circumspect. "Computer scientists have yet to develop any age assurance technology that is simultaneously private, accurate and reliable," they said. So, will the absence of accurate age assurance technology derail the government's plan?
The researchers said that in the absence of accurate technologies, the government may require people to use their digital IDs to verify their age on online platforms, but that could raise significant data security and privacy challenges. They argue the more workable path is to work with online platforms to develop better tools that help parents educate their children and curate the content their children can see.
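One way the privacy risk of digital-ID checks is often mitigated is by having a trusted identity provider attest only a yes/no claim ("over 16") rather than sharing a date of birth with the platform. The sketch below is a hypothetical illustration of that idea, not a design proposed by the government or the Melbourne researchers; the provider, the HMAC-based signing and all field names are assumptions, and a real deployment would use public-key signatures so platforms cannot forge claims.

```python
# Hypothetical sketch: a minimal "age attestation" token. The identity provider
# checks the birth date and signs only the boolean result; the platform learns
# a yes/no answer, never the date of birth itself.
import hmac, hashlib, json
from datetime import date

PROVIDER_KEY = b"demo-secret"   # stand-in for the identity provider's signing key

def issue_attestation(date_of_birth: date, min_age: int = 16) -> dict:
    """Identity provider side: derive the claim and sign it."""
    years = (date.today() - date_of_birth).days // 365   # approximate age in years
    claim = {"over_min_age": years >= min_age, "min_age": min_age}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return claim   # contains no birth date or identity details

def verify_attestation(claim: dict) -> bool:
    """Platform side: verify the signature and read only the yes/no answer."""
    payload = json.dumps({k: claim[k] for k in ("over_min_age", "min_age")},
                         sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["sig"]) and claim["over_min_age"]

if __name__ == "__main__":
    adult = issue_attestation(date_of_birth=date(2005, 3, 1))
    child = issue_attestation(date_of_birth=date(2012, 3, 1))
    print("adult granted access:", verify_attestation(adult))   # True
    print("child granted access:", verify_attestation(child))   # False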
"The path forward is to engage with technology in a way that makes our lives, and our children’s lives, more fulfilling, safe and diverse," researchers said.