Representatives from big tech companies consistently describe their products and services as being "safe by design" for children. I’m not buying it.
TikTok says: “We've designed our app with safety in mind, implementing a 'safety by design' approach that ensures we are building protection for our users, including teens and their parents”.
Roblox claims its platform “was developed from the beginning as a safe, protective space for children to create and learn”.
Unsurprisingly, all of the popular services that children use make similar claims about the safety of their products. Claims that are, at best, lofty ideals and, at worst, deliberately misleading.
There is plenty of evidence these claims are far from the reality of what children and young people are experiencing on platforms such as YouTube and TikTok, both of which use incredibly powerful AI-driven recommender systems that serve up whatever maximises attention, rather than what is age-appropriate, content from friends, or even the kinds of things users are interested in seeing.
A hard-hitting 2024 report by DCU's Anti-Bullying Centre showed how the recommender algorithms used by TikTok and YouTube Shorts were actively fuelling the spread of misogynistic and 'male supremacy' content.
There were two other notable findings in this research. Firstly, the speed with which the content started to appear was staggering: it took only 23 minutes after the account became active.
Secondly, once the user showed any interest at all in the recommendations, the content increased dramatically in both volume and toxicity.
Girls are not immune to being served harmful content either, though it tends to fall into different categories, such as eating disorders and body dysmorphia.
In a report published last year, the Centre for Countering Digital Hate reported YouTube’s recommender algorithm is pushing 13-year-old girls down rabbit holes of harmful content.
An analysis of 1,000 recommended videos found one in three were related to eating disorders, two in three focused on eating disorders or weight loss, and one in 20 involved content about self-harm or suicide.
Snapchat is, like YouTube and TikTok, incredibly popular with children under the platform's own minimum age requirement of 13 (36% of eight-12 year olds use it, according to our latest research). It promises it is "deeply committed to helping teens on Snapchat have a healthy and safe experience". Yet Snap Inc, its parent company, is currently facing multiple lawsuits in the US alleging that its harmful design fosters addictive behaviours and exposes children to risky content, such as cyberbullying, substance abuse, and self-harm material.
The horrifying impact of this content is well documented in a soon-to-be-released Bloomberg documentary called Can't Look Away.

A more recent market offering is Meta's Instagram teen accounts, which promise parents that "teens are having safe experiences with built-in protections on automatically" [sic].
Certainly, on the surface, the new design sounded promising, with far stronger protections against harmful content and harmful contact for those under 16.
But two recent reports suggest it's not as safe by design as its widely circulated ad campaign would have us believe. Accountable Tech released a report this month which found that, despite Meta's controls, every test account was recommended sensitive, sexual, and harmful content, that educational recommendations were minimal, and that most users reported distressing experiences.
Another report published in April by the 5Rights Foundation titled Is Instagram Now Safe for Teens? had very similar findings.
There can be no doubt these online products and services widely used by children are far from being "safe by design". They may not be intentionally harmful to children, but harm is a consequence of their design: a design that deliberately maximises profit, while safety has largely been an afterthought.
How is it that other industries are expected to adhere to far more stringent regulations? The toy industry, for example, is subject to a range of strict regulations before it can sell into the European market, including the CE mark, which shows a toy meets European safety standards for children. This ensures the toy has passed checks for things like dangerous chemicals, durability, and age-appropriate design. All of this happens before it reaches the consumer.
It is simply inconceivable that an industry as powerful, and as widely used by children, as this one is not subject to anything even close to this level of scrutiny and testing at the design stage. This is despite the passing of the EU Digital Services Act, which does attempt to put some minimum standards in place and to impose risk assessments on some of the larger platforms.
It's hard to fathom how WhatsApp was able to unleash its AI buddy, a virtual assistant powered by artificial intelligence, onto all of its users' screens, with the small caveat that "some messages may be inaccurate or inappropriate".
Forty per cent of eight-12 year olds in Ireland have a WhatsApp account, according to our research. For any parents who may not want such a powerful tool in their children's pockets, it is unfortunately not a feature they can opt out of, despite it being described as "optional".
As we continue to navigate the online landscape, it's clear the promise of "safety by design" from major tech companies almost always falls short of its stated goal. Big tech must be held accountable for these shortcomings, and it's crucial that we demand stricter regulation and oversight from Government and regulators to ensure these services are truly safe and appropriate for the young audiences they serve.
Until these changes are made, children will continue to face a digital environment that prioritises profit over their wellbeing and exposes them to very real harms.
Alex Cooney is chief executive of Ireland’s online safety charity, CyberSafeKids. Find resources to help protect children on its website.