Posted on: 27.03.2024

#KateGate – Are conspiracy comms worth the risk?


Until Friday 22nd March, it had been almost impossible to scroll through more than four videos on your TikTok For You Page without another ‘Where is Kate?’ theory appearing on-screen. 

Serious physical illness? Nervous breakdown? Royal affair? BBL?! The online community’s two cents were getting wilder as the days went on, and the eventual announcement that the Princess of Wales is undergoing chemotherapy for cancer left a lot of commentators looking callous at best.

One thing is clear: public events like #KateGate are a huge driver of online engagement. With 352M views of #katemiddleton in the UK on TikTok alone in March, online videos were influencing press stories – and perhaps even the behaviour of the Royals themselves. Kate shared her diagnosis after intense public pressure, having intended to stay quiet until Easter at the earliest.

But how? Whether something goes ‘viral’ comes down to timing and public interest, and this controversy nailed both. 

This isn’t a unique phenomenon. We’ve seen numerous social media conspiracies with enough momentum to shift national sentiment over the years: Beyoncé and Jay-Z are Illuminati leaders; Pizzagate; Avril Lavigne has died and is being impersonated; flat earth theory… the list goes on. It also happens on a micro-level, for example with true crime sleuthing communities who use the internet to track down fugitives and reopen cold cases. The difference is that this time the big reveal has ended with a heartbreaking illness, rather than a gossip-worthy revelation.

The stages of conspiracy theory escalation.

Whilst we love the thrill of an out-there theory, the danger is that social media acts as an echo chamber, breeding a shared identity that can escalate into conspiracy theory radicalisation – as seen so devastatingly over recent weeks.

Views generate income on platforms like TikTok, which has led to a surge in conspiracy videos – often from anonymous accounts, and often made with freely available AI tools that produce deepfake imagery and video. Once-unconventional, unfounded beliefs are able to build momentum through four stages:

 – Identity Confirmation: Users scan social media and online forums like Reddit to validate their own beliefs.

 – Identity Affirmation: People selectively choose information to support their beliefs, even if it means distorting facts. 

 – Identity Protection: Individuals defend their beliefs by discrediting opposing evidence, often through hostile online interactions.

 – Identity Enactment: Some seek approval from a wider audience, sometimes resorting to recruiting others or advocating for extreme actions.

These stages form a loop, strengthening a shared conspiratorial identity no matter how little supporting evidence there actually is.

What might have begun with a simple search – ‘Where is Kate?’ – tipped into affirmation when it was discovered that her PR team had released a doctored photo. Those who called for moderation were shouted down by ever-vocal online accounts, who went on to share their own increasingly wild conspiracies and “evidence” drawn from years-old photos and videos. And so the cycle continued.

The need for better platform governance.

Of course, some theories are just plain silly and virtually harmless (like the claim that Obama can control the weather) but others spread lies, make people doubt the government and media, and sometimes even lead to violent behaviour.

Take the pandemic: fuelled by fake news sources and uninformed voices, many chose to believe that the virus was a government plan to control the population. These beliefs made people throw out their masks and avoid vaccines, putting themselves and everyone else at increased risk.

The Center for Countering Digital Hate found that teens are significantly more likely to believe online conspiracy theories than older generations, which helps explain why these theories gain so much traction on TikTok and Instagram.

Despite updates to platform community guidelines following political pressure (such as the requirement that all AI-generated content be labelled as such), and 180 million fake TikTok accounts being taken down in the last three months, creators are getting wise to ways of avoiding detection, and a large portion of harmful content goes unflagged.

How brands can tap into these discussions responsibly. 

With bizarre theories and hotly debated topics consuming the national narrative and taking up more of our feeds, how can brands contribute to these conversations responsibly? It’s clear that getting it wrong can be awkward at best – there are some swiftly-deleted ‘Where’s Kate?’ memes on a few brand pages to prove it.

But there is space for brands to get involved and leverage these moments to their advantage, though the approach must be delicate and cautious.

So whilst joking about the Princess of Wales’ secret BBL shouldn’t have made it past the brainstorm stage, other conspiracy-related content has actually hit the mark – like a witty TikTok from Duolingo reacting to the growing theory among Gen-Z that Birds Aren’t Real, or Joe Biden leaning into conspiracies that he and Taylor Swift fixed the Super Bowl.

That said, don’t just post for posting’s sake. If the topic isn’t relevant to your industry and you don’t have anything entertaining or insightful to contribute – don’t. And above all, only share information from verified, credible sources. Businesses certainly don’t want to be adding to the spread of misinformation, which has the potential to do serious harm to brand image.

To sum up, platform regulators still have a long way to go in monitoring the millions of bizarre opinions and theories on social media – progress we encourage… even if we secretly love the drama of it all.
