Twitter has launched a new tool aimed at policing misinformation and disinformation, and it appears to have taken inspiration from a familiar source.
The platform’s new “Birdwatch” feature takes an approach similar to Wikipedia, allowing users to debate and (hopefully) agree on what can be trusted on the site.
“We’ll use the notes and your feedback to help shape this program and learn how to reach our goal of letting the Twitter community decide when and what context is added to a tweet,” the platform said on its Twitter support account.
For the moment, participation in Birdwatch is limited to the US.
Twitter’s vice president of product Keith Coleman called Birdwatch a “community-based approach to misinformation” that “might be messy and have problems at times” but “is a model worth trying”.
Twitter already occasionally adds context and labels to tweets of murky veracity but now wants “to bring more voices to the table to help determine when context should be added and what it should say” by outsourcing some of that decision making to its own users.
“Birdwatch allows people to identify information in tweets they believe is misleading and write notes that provide informative context,” Mr Coleman said.
For now the notes will only be visible on a separate Birdwatch site.
“Eventually we aim to make notes visible directly on tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors,” Mr Coleman said.
Mr Coleman said transparency is a focus, with data submitted to Birdwatch available for public download.
He added that Twitter aimed to publish the code for any algorithms developed to power Birdwatch, such as those that could determine a Birdwatcher’s reputation or derive a consensus from multiple opinions.
Wikipedia is the largest online example of people coming together to debate and decide what should be considered fact, and has been for much of the two decades since its launch in 2001.
Social media platforms, on the other hand, have just gone through a decade that began with them helping users organise uprisings against tyrannical dictators in countries like Tunisia and Egypt, and ended with social media-fuelled disinformation whipping supporters of former US President Donald Trump into a frenzy.
The riot in Washington D.C. on January 6 forced the hand of the social media companies that had until then largely avoided taking action over Mr Trump’s behaviour on their platforms.
“They (Twitter) did a poor job of dealing with him for a very, very long time,” Wikipedia founder Jimmy Wales recently told AFP.
“He was clearly spreading disinformation, he was clearly being abusive to people.”
While Wikipedia has had its own critics — school teachers and university lecturers have been telling their students not to rely upon it since its very inception — in recent years the site has emerged as a beacon of hope on the internet.
Not driven by profit, the site is run by the Wikimedia Foundation. Despite its reputation as a place where you can log on and say pretty much anything about anything, its community of moderators and editors applies a higher level of scrutiny than the average social media user: contributors must provide links to reliable outside sources, which are themselves subject to debate, and often do their own research, such as contacting the authors of those sources to verify what they meant.
Like other online communities, it’s not without its own problems: discussions can turn heated and at times tip into uncivil harassment, and the community skews male (leading to Gender Bias on Wikipedia, which you can read more about and even debate the merits of at where else but …).
The Wikimedia Foundation has acknowledged these problems and last year took steps to begin addressing them, with trustees voting to bring in new policies.
“Harassment, toxic behaviour, and incivility in the Wikimedia movement are contrary to our shared values and detrimental to our vision and mission,” the board said at the time.
“The board does not believe we have made enough progress toward creating welcoming, inclusive, harassment-free spaces in which people can contribute productively and debate constructively.”