
Andrew Tate, the manosphere and social media platform responsibility

Photo: @tingeyinjurylawfirm

by Ashleigh Millar

Millennials among us may remember Hunter Moore, the man whom Rolling Stone aptly named “the most hated man on the internet” and who is now the subject of Netflix’s recent documentary series of the same title. Skip ahead 12 years, to a landscape abundant with social media platforms, and we arguably have a new most hated man on the internet: Andrew Tate. This time, unlike with Moore, social media platforms are taking matters out of the courts’ hands and into their own.

Hustler’s University and social media scrollbait

Former world champion kickboxer turned ‘King of Toxic Masculinity’ and self-professed success coach Andrew Tate has had a meteoric rise to infamy over the summer. His explosive success was driven by controversial video clips and soundbites of what many deem to be offensive, harmful, misogynistic, and hyper-masculine propaganda that feeds the manosphere.

Tate also established an online course, Hustler’s University, and the internet is divided as to whether it is a multi-level marketing scheme or simply an affiliate programme that claims to teach its members (largely young males) how to go from rags to riches. Either way, “students” are incentivised to promote clips of Tate and Hustler’s University to recruit new members and, as a result, encourage the social platforms’ algorithms to pick up and promote his social media presence. This is an important detail to remember – it is not just Andrew Tate who is advocating for himself and his business.

After the controversy came to a head, with YouTube gaming influencer Daz Black criticising the social platforms for their lack of intervention, social media giants Meta, TikTok, Twitter, and YouTube decided to ban Tate from their platforms. The ban follows Tate’s breaches of their policies and means that he will not be able to hold an account or post content on their apps and sites.

However, the questions remain: a) how effective will this ban be in the long run? and b) who polices what is offensive enough to ban?

A social ban: Threat or promise?

While banning Andrew Tate can be seen as a statement that creators and influencers will lose the privilege of using platforms to spread hate speech, there is good reason to question whether enough has been done. For example, while Tate’s accounts have been suspended, the accounts of his Hustler’s University members, fans, followers, and copycats have not been banned from promoting his content and defending his actions (remember, these were the main drivers of his social media success in the first place).

Furthermore, there are doubts about how quickly these platforms acted – did it take too long for appropriate steps to be taken, and has the lasting cultural influence already taken hold in the metaverse? With the current algorithms, is it possible to intervene quickly without public outcry drawing attention to the problem first? Tate’s content racked up 11.6 billion views on TikTok (despite campaigns and demands to have it suspended) before the popular platform took action. His affiliate marketing tactics allowed him to artificially boost his content through algorithmic manipulation and gain an enormous social media following in the space of three months.

Moreover, the Observer carried out an experiment on TikTok, posing as a teenage boy (bearing in mind that the minimum age for account creation is 13), and found that the algorithm bombards this demographic with harmful misogynistic content. This is surprising for an app that has controversially shadow-banned and censored its users’ content over certain words and hashtags in the past, warranted or not – meanwhile, campaigns, content flagging, and articles surfaced throughout Tate’s three-month social media reign to no effect. Now, many fear that the damage has already been done and that the divide in the metaverse between the manosphere and everyone else is only getting wider. It also raises concerns that the ‘incel’ community is gaining momentum in its defence of Tate, framing the ban as an infringement of freedom of speech.

Think NATO, but for the metaverse

Andrew Tate is not the first of his kind; take poker player meets business tycoon meets social media influencer Dan Bilzerian. Bilzerian rose to online fame in the 2010s thanks to Instagram and is often seen objectifying women, promoting gun culture, and living a lavish lifestyle that many young men aspire to. Yet, despite having 33.5 million Instagram followers and posting what many feminist organisations deem to be misogynistic content that reaches a large audience, he has not been banned.

Furthermore, Hunter Moore, arguably the creator of ‘revenge porn’, was sentenced to two-and-a-half years in prison, followed by three years of supervised probation, for hacking accounts and stealing private photos of women and girls to post and ridicule on his site, IsAnyoneUp.com. Despite this, following the recent Netflix docuseries about Moore, the public noticed that he still had an active Twitter account, which they reported en masse and which was subsequently removed. While his original Instagram account has not been active in many years (although all previous content is still available to view), he has created another account that has recently been active and updated – not to mention his two TikTok accounts.

So, who is the judge and jury in this metacourt? Who makes the laws, who polices the content and users, and who bangs the gavel? These social media platforms are private companies, but with the rapid development of social 2.0 and web 3.0, and the threat of incel attacks on the public spawning from online manosphere communities, similar questions about regulation and policing arise as they did in previous episodes, such as the role of social media in the 2016 US presidential election, the Brexit vote, and numerous child safety cases. This is particularly true when some powerful influencers, like Andrew Tate, are punished only after they have caused widespread harm, while others, like Hunter Moore and Dan Bilzerian, are left to carry on.

As social platforms increasingly position themselves as content platforms, user-generated or not, it becomes ever harder for them to maintain the position that they have no curatorial duties or responsibilities for the content. While some things are prohibited on these platforms already, others, despite going against community guidelines, are left to spread virally – amplified by proprietary algorithms. Users have often taken it upon themselves to report, moderate, and pressure platforms into removing such harmful content, but the battle, as with Tate, is often uphill.

The role of social platforms in content moderation is one ripe for change, as they move from being purely social and community-driven platforms to content platforms in their own right, with all the bells and whistles of revenue streams and promoted discovery built in. The creator economy means users are now making abundant content in every possible niche and format – but someone must moderate it. Perhaps it is time for the social platforms to rethink their stance; if they do not, it will only be a matter of time before regulation takes the decision out of their hands.
