Government intervention in digital entertainment and social media is now a reality
Photo: Daria Nepriakhina
The tension between rapidly innovating tech and government regulation seems to be peaking at long last. China has made the first comprehensive move to crack down on digital entertainment: children under 18 are now limited to one hour of gaming per day on Fridays, weekends and public holidays, a maximum of three hours per week. The new anti-addiction rules also cover social media, banning the ‘endless scroll’ and curbing the addictive or otherwise harmful properties of other apps, with particular focus on the ‘black box’ of algorithmic content.
Meanwhile the UK is contemplating its own social media intervention: a draft Online Safety Bill, published in May 2021, is due to be further addressed in December of this year. The bill would impose a ‘duty of care’ on providers to tackle harmful activity online, including cyberbullying, misinformation, age-inappropriate content, and content promoting violence or self-harm.
These regulations are perhaps timely in the wake of the Covid-driven shift to digital-first living, but arguably they are too little, too late. Curbing the ‘wild west’ of a fast-moving digital space, one that has historically outpaced regulators’ capacity even to understand it, much less control its expansion, is no small task, and the potential negative consequences of misapplying regulation (including a backlash in public sentiment should there be missteps) are a significant risk.
The problematic aspects of social media are well documented. Privacy violations were laid bare by the Cambridge Analytica scandal in 2018. Mental health concerns over the addictive qualities of likes and the endless scroll have been raised for years. These concerns highlight a conflict of interests: the metrics that look good to shareholders, namely time spent on platforms and high engagement, are now coming up against the real impact that acquiring those metrics has on the lives of the users who generate them. Then there is the misinformation problem, which has influenced political elections and now threatens the efficacy of the vaccine rollouts meant to end the coronavirus pandemic (which has now eaten into nearly two years of ‘normal’ life across the globe).
Will these regulations be enough, and will they have the intended effect? They come on the heels of a realisation that asking companies to self-regulate is not enough to manage these issues. China’s focus on algorithmic content, and the negative effects it can have, places the onus on tech and media providers to be transparent and to monitor their content, rather than just their engagement metrics. Britain’s broad ambition to become the “safest place in the world to be online”, on the other hand, may be a big ask, particularly when its primary enforcement tool is the threat of fines on internet service providers. In the US, which is once again weighing whether to ban TikTok, the motivation looks more like political grandstanding than genuine concern for users. With lax domestic online privacy laws (outside of outlier states such as California, with its California Consumer Privacy Act), and as home to most of the tech giants guilty of overlooking their own negative side-effects, the US will probably have a particularly difficult time navigating online safety.
Nevertheless, it is clear that the political consensus has shifted towards taking a cold, hard look at what the big tech companies are doing, and regulation is sure to follow in some form or another. Some companies are moving first, such as Instagram, which is rolling out the option to hide like counts. Others will have to adapt to survive. One thing is certain: the social media sphere is about to focus on becoming human-friendly.