
Regulatory decisions around AI will shape the future of digital ecosystems

Photo: Google DeepMind

by Hanna Kahlert

Looking back, Frances Haugen's testimony marked a turning point for social platforms. It was the moment when they, and tech more generally, went from being able to ‘move fast and break things’ with no one really paying attention to an environment where not only users but regulators were watching closely.

MIDiA has been writing about this for years, but change happens little by little and then suddenly all at once. We are not quite there yet – but the effects will extend beyond Meta, Google, and social more generally, and are in part being sparked now by AI. 

The US Department of Justice (DOJ) is taking Google to court over alleged abuse of its online search monopoly to hinder competition and extract bigger profits. The DOJ also has another lawsuit in the works against Google's ad-tech business, and the FTC has hinted at motions to go after Meta's social empire. Spotify recently published a ‘call to action’ essay condemning Apple's ability to stifle competition in its App Store (in the US and elsewhere, except Europe, where the Digital Markets Act has already been passed). It seems there are indeed advantages to owning the entire user journey and value chain – so much so that it becomes problematic for free market principles.

All these back-and-forths are nothing new, however. Legal proceedings take years, and lawyers find loopholes. It is unlikely that we will see much change to the role tech giants play in our daily lives. More interesting is what happens when we add AI into the mix.

AI is both an opportunity and a threat. It can reduce costs and increase output, but this also means job cuts and a reduction in creative diversity on the internet. The big on-the-ground disputes at the moment are over rights ownership and copyright: who owns what the AIs create? Despite this, apps like Spotify are already trialling AI to ‘DJ’ bespoke music for users throughout the day, and it is likely only a matter of time before a proposition emerges that can simply make the music itself, undercutting artists further. More broadly, the main challenge of AI is that it makes work, especially (well-paid) digital-first work, so efficient that job losses are sure to abound amid a broader re-evaluation of labour itself. This is very much a challenge for governments and regulators – one that they are clearly starting to take seriously.

Japan has come down on the AI-friendly side, with few restrictions on the data that can be used in training sets. Many groups there are voicing concerns that this favours copycats over original creators and that greater clarity is needed. Meanwhile, France is looking to pass legislation that would require AI software to request permission before using copyrighted works, to stamp AI-generated output as such with credit to the original creators, and to tax companies exploiting AI-generated works that fail to do so.

Firstly, note that regulation is coming quickly in response to these developments, as opposed to the decades it has taken for most digital developments. Secondly, regulators no longer seem as friendly to the uninhibited growth and development of these companies as they used to be. And finally, the ensuing map of regulation and AI use will determine which markets grow rapidly at the crest of the AI-powered wave, and which do not.

‘AI-friendly’ markets will see more of this uninhibited development and growth of the ensuing web 3.0 sectors (at a high social cost), and companies that optimise for this environment will dominate. Meanwhile, more creator-protective markets, like Europe, will likely continue to pass stringent protections and rules around the use of AI. As a result, international companies that have already succumbed to the ‘black box’ problem (there is no visibility into how training data shapes AI-generated output, which makes attribution almost impossible, among a host of other issues) will have a lot of adapting to do just to enter those markets – including, but not limited to, re-training models from scratch or finding monetisation models that do not rely on explicitly selling access.

Companies have no inherent incentive to protect consumers or creators. This is why regulatory powers exist, in the hands of elected government officials. Those regulators must now make choices when it comes to tech: either get stricter with the activities of these monolithic tech companies, or allow AI development to proceed largely unchecked with a few slaps on the wrist. The rising tide of lawsuits hints that Western governments may come down on the side of people rather than corporate profits.

Whatever the outcome, tech companies far and wide will feel the effects and will face a choice of their own: embrace protections and social wellbeing, or abandon such protective markets in favour of growing ones with fewer humanitarian concerns. There will likely be space for both, as such things never end up being simply black and white. AI-centric companies, for example, could likely still operate under stricter regulation, albeit with limited monetisation, while the premium on human input would (if regulation were structured correctly) become the more economically viable option in such an environment. Tech companies should pay attention to the regulatory winds to optimise their products for the future marketplace.
