The addictive architecture of social media


Social media and the rapid rise of generative AI are among the most powerful influences on modern society. This has led to growing scrutiny of the addictive design of social media platforms and the ethical risks of AI.

Numerous studies conducted over the past decade show a link between excessive social media use and high levels of anxiety, depression and low self-esteem, particularly among teenagers.

The challenge, according to experts, lies in the psychological engineering of these platforms. They borrow a concept from the design of slot machines known as the variable reward schedule: like slot machines, social media algorithms are engineered to deliver rewards, in the form of likes and notifications, at unpredictable intervals.

This creates a powerful dopamine loop. Because the brain doesn’t know when the next hit of validation or entertainment is coming, it stays in a state of heightened anticipation, making it difficult to put the device down and encouraging users to prioritise screen time over real-world engagement.
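The schedule described above can be sketched as a toy simulation (an illustration, not taken from the article or any platform's actual code): each "check" of a feed pays off with a fixed probability, so the number of checks between rewards varies unpredictably, which is what makes the schedule compelling.

```python
import random

def simulate_checks(n_checks, reward_prob=0.15, seed=42):
    """Toy model of a variable reward schedule: each feed check pays off
    with fixed probability, so the spacing between rewards is unpredictable.
    All names and parameters here are hypothetical, for illustration only."""
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(n_checks):
        since_last += 1
        if rng.random() < reward_prob:  # a like or notification arrives
            gaps.append(since_last)     # record how many checks it took
            since_last = 0
    return gaps  # the intervals between rewards vary widely

gaps = simulate_checks(1000)
print(min(gaps), max(gaps))  # spacing between rewards is irregular
```

Because the payoff probability is constant per check rather than tied to a predictable timer, no amount of checking reveals when the next reward will land, mirroring the anticipation loop described above.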

In February 2026, the European Commission found TikTok in breach of the Digital Services Act due to its addictive design. Regulators cited features like infinite scroll, autoplay and highly personalised recommender systems as mechanisms that shift the brain into autopilot mode, reducing self-control and driving compulsive behaviour.

In 2025, Australia became the first country to ban under-16s from accessing social media, after a study found that 96% of children aged 10 to 15 use social media and that 70% have been exposed to harmful content such as cyberbullying or material promoting self-harm.

Other countries, including the UK, Denmark, Norway, Germany and France, are considering similar restrictions.

Meta, the parent company of Facebook and Instagram, is currently facing more than 2,300 lawsuits alleging that its platforms were intentionally designed with addictive features that harm children’s mental health. Internal documents released in late 2025 showed that Meta shelved research indicating that users who deactivated Facebook for one week reported lower levels of depression and anxiety. Internal Meta surveys found that 33% of users, and 48% of teenage girls, said Instagram made them feel worse about their bodies.

It’s not just social media platforms that are harming users’ mental health. As millions of users turn to AI chatbots for therapy-style advice, a 2026 study from Brown University found that even when AI is instructed to act like a therapist, it routinely violates ethical standards, often mishandling crises or offering deceptive empathy that mimics care without genuine understanding.

Google is currently facing a lawsuit after its Gemini chatbot allegedly instructed a 36-year-old man to kill himself.

Research published by Young Futures, a US nonprofit with a mission to make the digital world safer for young people, identified a segment of what it calls “emotionally entangled superusers” — essentially vulnerable youth who turn to AI for connection when offline support fails. While AI can offer short-term relief, it often functions as an inadequate substitute for professional care, potentially deepening feelings of isolation.

As regulators around the world begin to prioritise online safety, responsibility is shifting from the user to the architect. Individual boundaries will remain important, but real progress will come when platforms are legally required to put their users’ mental health ahead of data-driven engagement metrics.

