Social Media Regulation: A Proposal
Fine - you drove me to it. Last time I posted about this I hit the number 1 spot on Hacker News and was (rightly) skewered for not providing a solution - just listing a bunch of problems. This time…I have answers.
Before I get onto them, I’d like to thank the colleagues with whom I holidayed (see my previous post) and who ultimately helped sharpen these ideas. So, let’s get started - how do we set about banning (or not) social media?
Guiding Principles
I have absolutely zero interest in telling teenagers where or how they can talk to each other. Every generation of adults bemoans the conversations of their teenagers and thinks their minds are being warped. Broad bans of social media are toxic because of exactly this. Teacher. Leave them kids alone.
That means that I’m only going to be regulating algorithmic feeds. I don’t have evidence that they’re more harmful than the combination of smartphone cameras and Instagram, or than talking with your friends in an unregulated way…but they’re the bit I’m most scared of, so I’m starting there.
I have very little interest in uniquely targeting teenagers with regulations until evidence says that they are uniquely likely to be harmed by social media. Therefore, any regulation that protects teenagers is going to have to come with some bonus new regulation for adults.
I strongly believe that any approach must be:
- Gradual - I can’t abide the thought of dropping a 16- or 18-year-old with no experience of social media whatsoever into the bullpen that is a state-of-the-art algorithmic feed. That seems so obviously wrong as to be laughable. It’s also the approach currently proposed where I live.
- Practical - I can’t abide calls to “regulate the algorithms” without any degree of specificity. Think like you work at a tech company (I do) and make concrete suggestions that could actually be implemented.
Right, with that out of the way, let’s begin with ~8-year-olds.
Young Children
The simple answer would be to ban algorithmic feeds for young children, but that doesn’t sound practical (is ‘most popular’ algorithmic?) or gradual (an outright ban never is). Let’s start by defanging the feeds a little.
I propose that for children under the age of 11, any ‘feed’ of content can only use geography, language, time and age as features. No personalisation. No more granular features. Any feed will ultimately have to wring whatever signal it can out of the most popular/recent content for your geography, language and age.
What constitutes an algorithmic feed? Any surface where software chooses which content to display. News apps, social feeds, Spotify’s homepage…anywhere an algorithm (of which ‘most popular’ is one and ‘most recent’ is another) is determining which content to show a user.
I dislike regulation that only applies to certain apps, surfaces or company sizes - are we trying to protect children or to hamstring incumbents? This proposal applies universally.
Note that I’m fine with ‘per-surface’ popularity or recency - you can show the most popular sport stories in the ‘sport’ subsection and the most popular weather stories in the ‘weather’ subsection. But no working out whether a specific 10-year-old is more interested in sport or weather and tailoring their homepage to reflect that. You can, however, work out what all 10-year-olds are into and have their homepage reflect that.
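To show this is concrete enough to implement, here’s a minimal sketch of what a compliant ranker might look like. Everything in it - the names, the fields, the recency decay - is my own hypothetical illustration of the rule, not a prescription:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Item:
    title: str
    geography: str    # e.g. "UK" - coarse, country-level
    language: str     # e.g. "en"
    min_age: int      # age rating of the content itself
    popularity: int   # views within this geography/language/age band
    published: datetime

def rank_feed(items: list[Item], *, geography: str, language: str,
              age: int, now: datetime) -> list[Item]:
    """Rank using only the permitted features - no per-user signals at all."""
    eligible = [i for i in items
                if i.geography == geography
                and i.language == language
                and i.min_age <= age]

    # 'Most popular, decayed by recency' is one of the simple
    # algorithms that would still be allowed under this proposal.
    def score(item: Item) -> float:
        hours_old = (now - item.published).total_seconds() / 3600
        return item.popularity / (1 + hours_old)

    return sorted(eligible, key=score, reverse=True)
```

The key property: rank_feed never receives a user ID or interaction history, so every 10-year-old in the same place, speaking the same language, sees the same feed.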
Older Children
Given how I’ve stated my opening gambit for younger children, and the principles I’ve laid out, I think you can tell what I’m going to say for older children.
We’re gradually going to allow more and more demographic features to be used. Self-reported gender (which, as any data scientist will tell you, is gold for recommendations), local geography, stated choice (self-selection of topics of interest)…ultimately we’re introducing more and more features to allow for better and better recommendations in any content feed.
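As a sketch of what that gradual unlock could look like in code - with ages that are purely illustrative, for reasons I’ll get to in a moment:

```python
# Purely illustrative thresholds - picking the real ones is exactly
# the kind of detail I'd leave to the process described below.
FEATURE_UNLOCK_AGES = {
    "geography": 0,            # country-level, allowed from the start
    "language": 0,
    "age_band": 0,
    "self_reported_gender": 12,
    "local_geography": 13,     # city/region rather than country
    "stated_interests": 14,    # self-selected topics of interest
    "engagement_history": 16,  # full personalisation at $AGE_OF_CONSENT$
}

def allowed_features(age: int) -> set[str]:
    """The features a recommender may use for a user of the given age."""
    return {feature for feature, min_age in FEATURE_UNLOCK_AGES.items()
            if age >= min_age}
```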
Isn’t that making their algorithmic feeds stickier and stickier, potentially leading to more and more harm? Maybe.
But remember - as soon as they hit $AGE_OF_CONSENT$ they’re going to be hit with the full force of personalised, algorithmically curated feeds that are designed to circumvent the decision-making part of their brain and go straight to the automatic response system.
It might sound scary when I lay it out like this, but the alternative is…well, what? At some point we’re going to expose people to the full force of state-of-the-art algorithmic feeds. The least we can do is let them build towards it gradually.
Beyond the illustrative numbers in the sketch above, I haven’t specified the ages at which certain features become available. I can if you want, but really that’s the kind of detail I’d expect to be put together in conjunction with tech companies, informed by graphs of the extra recommendation power each demographic feature buys you.
For Everybody
Last time I argued (by symmetry, and possibly poorly) that you shouldn’t ban social media for children without regulating the experience for adults. I drew analogies to the gambling industry and I’d like to do so again.
Regulated gambling offers repeated reminders that you should be having fun, lets you set time and monetary limits, and shows you how much you have spent.
Using the age-old maxim that time = money: does the fact that algorithmic feeds spend your time rather than your money absolve them of those responsibilities? Currently, yes. But I’d argue that algorithmic feeds should come with exactly the same set of controls.
- “You have spent 22 hours this week scrolling”
- “You only have 15 minutes left on your 4-hour daily limit”
- “Would you like to set a time limit?”
- “Your usage is up by 17% this week compared to last week. Would you like to take a break?”
None of that sounds especially onerous, and it could all be comfortably implemented in relatively little time by companies operating algorithmic feeds. The only reason they haven’t is that they haven’t been required to. Of course, all of these controls would also be available to children (and their parents).
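To back up the ‘relatively little time’ claim, here’s a rough sketch of the bookkeeping behind those prompts. All the names and thresholds are my own invention, not anyone’s actual implementation:

```python
from datetime import timedelta

# Hypothetical sketch of the state behind the prompts above; a real
# implementation would persist this server-side, per account.
class UsageMeter:
    def __init__(self, daily_limit: timedelta | None = None):
        self.daily_limit = daily_limit
        self.today = timedelta()
        self.this_week = timedelta()
        self.last_week = timedelta()  # rolled over by a weekly reset job

    def record(self, session: timedelta) -> list[str]:
        """Log one scrolling session; return any prompts now due."""
        self.today += session
        self.this_week += session
        prompts = []
        if self.daily_limit is not None:
            left = max(self.daily_limit - self.today, timedelta())
            if left <= timedelta(minutes=15):
                prompts.append(f"You only have {left // timedelta(minutes=1)} "
                               f"minutes left on your daily limit")
        if self.last_week > timedelta() and self.this_week > self.last_week:
            change = (self.this_week / self.last_week - 1) * 100
            prompts.append(f"Your usage is up {change:.0f}% this week compared "
                           f"to last week. Would you like to take a break?")
        return prompts
```

A counter, a daily limit and a weekly comparison - hardly an engineering moonshot.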
Downsides
I’m making the experience of using algorithmic feeds worse for everybody (in the short-term, at least). Rather than seeing content that works perfectly for you, you’re seeing the average content that works for everybody.
I’m killing a bunch of businesses - and potentially communities. Anybody whose business or community depends on finding a niche audience is going to have a greatly reduced ability to do so.
I don’t really have any concrete proof that algorithmic feeds are (especially) bad for teens, and yet they’ll be the ones to bear the brunt of this regulation. You could argue that we have to act based on the precautionary principle. I don’t know that I agree, but I’d rather see this regulation than an outright ban.
Children will probably still spend hours on end on ‘social media’, and will struggle with all the things that children and teenagers have struggled with for years (belonging, identity, group dynamics). There’s no regulation that will fix “being a teenager” and nor should there be.
Messaging is deliberately left wholly untouched - if we must intervene here (and maybe we have to - I acknowledge that the power to talk with children across the world in a wholly private manner is a new capability that previous generations didn’t need to wrestle with) then we should do so in a targeted and purposeful way.
This proposal requires social media companies to have some notion of the age of their users. There are serious privacy concerns that will come with this requirement - concerns we are currently in the throes of bungling in the UK.
Conclusion
I’ve proposed an implementable and impactful set of regulations for social media.
I think it avoids a lot of the problems with a “ban on social media for under-16s” that is currently (?) making its way through the Lords. But I still think it offers protection for teenagers as they find their feet in the world.
I also think it equates algorithmic feeds and gambling in an entirely sensible way.
What do you think?