A report on ‘The Truth Spectrum’: Fake news, disinformation, & propaganda

Last week I went to an evening event on fake news hosted by Quartz, General Assembly and Tech City at the Moorgate WeWork space (a brilliant space for events and free to use). It pulled an interesting digital, media and news crowd.

The three panels explored the history of fake news in science and how social networks, filter bubbles, and selection bias have created a media ecosystem where the facts start to blur into shades of grey. There was debate on the role of algorithms in spreading the news, interviews with the people making tools to stop fake news from proliferating, and discussion on the future of the news business.

Here is what I took away from the evening:

1. Fake news is, in fact, old news

The science sector has been battling fake news for years, and there is a great deal we can learn from its experience. Akshat Rathi from Quartz interviewed Tracey Brown from Sense About Science, an organisation that advocates for science and evidence. One example of its work is countering the popular but discredited view that vaccines cause autism.

Tracey is particularly worried about the perception that the public is not interested in facts – that we all live in bubbles and don't care. Her experience is that people are open-minded and that the shrill voices don't speak for everyone. People will often work very hard to understand something; there are flashpoints that motivate them. Can we help trigger these?

Coming back to science, most people ask Sense About Science whether science is credible. There is a lack of trust that stems from the science community being seen as part of the elite. Her advice to scientists is to be pragmatic and to start by answering the questions people actually want to ask, rather than leading with the sell.

I was also interested in how they tackle celebrities who sometimes accidentally promote misinformation (think of Sting suggesting that planting trees anywhere can compensate for rainforest deforestation). They always give them a way out – the space to come back and save face publicly.

2. Fake news feeds off emotions, not facts

This is why the best friend of fake news is social media. The term itself has also been weaponised by Donald Trump to dismiss anything he doesn't like.

Alastair Reid from the Press Association argued that there are psychological, societal and algorithmic reasons why social media is so good for fake news. Social media connects people with similar views, and we trust people we know, so the echo-chamber effect snowballs. We are also victims of our own confirmation bias: we are predisposed to agree with what we already think. And there are no negative repercussions on social media – if you lie, you get likes.

With barriers to entry for publishing falling away, there is a multitude of sources and it’s hard to know who to believe. There is a growing social crisis about ‘how we know what we know.’

It's hard to see this improving. Social networks appear to be making consumers far more dependent on single sources of news. Recent Pew Research Center data found that half of Facebook news consumers get their news from that site alone. This is truly troubling, and it makes it pretty hard for Facebook to argue against being classified as a media organisation.

[Chart: Pew Research Center – half of Facebook news consumers rely on that site alone for news]

There is a tension between the economics of the media business and the need for headlines that sell, but it still leads to the conclusion that the role of journalists and trusted news organisations as truth-checkers is becoming more critical. A great video from Quartz explores the future of journalism in a VR world and the importance of a trusted source when you could be physically immersed in an alternative 'fake news' reality.

There are a number of organisations working with social networks and the media to shine a light on facts. These include First Draft, Poynter’s International Fact Checking Network, and Full Fact.

3. Enlisting algorithms and tools to combat fake news

FactMata (partially funded by Google News) is building an AI-powered fact-checking community. Machine learning flags topics, and then humans take action, giving information a quality score – something they feel robots can't yet judge effectively.

Wikipedia was also represented, and we heard how its 130,000 human editors are backed up by a bot army that automatically translates content into other languages, creates posts based on Wikidata, and identifies vandalism. It is essentially striving for some sort of equilibrium between machines, humans, lies, and facts.

The most important question here came from the audience that night:

What’s worse? Fake news or those countering it and deciding what is trustworthy for us?

There is an important point here about not giving up on humans' ability to question, and to form arguments for and against opinions we personally disagree with.
