To tackle deliberate online falsehoods, look also to non-legislative efforts

In the first two weeks of public hearings by the Select Committee on Deliberate Online Falsehoods, academics and experts debated the role that legislation should play in countering such falsehoods.

Several experts agree that legislation can play a role in the fight against deliberate online falsehoods, but that rapid advancements in technology have created gaps in existing legislation that need to be plugged.

However, experts also agree that legislation cannot be the only solution. First, legislation will always lag behind the increasingly sophisticated technological tools that perpetrators use to produce and disseminate online falsehoods.

Second, relying solely on legislation may cultivate over-reliance among citizens on authorities to help them discern truth from fiction.

Thus, a suite of non-legislative interventions with both near-term and long-term effectiveness is required to complement legislation.

Measures with near-term efficacy, such as self-regulation by Internet intermediaries (for instance Facebook, Google and Twitter) and fact-checking, provide interim relief, while measures such as strengthening critical literacy will boost people’s “immunity” against falsehoods in the long term.

Self-regulation by Internet intermediaries

As people spend more and more time online, one important measure is for Internet intermediaries to self-regulate, leveraging their technological expertise to tackle a problem they are complicit in.

Technology companies have acknowledged the importance of self-regulation, experimenting with different approaches to counter deliberate online falsehoods both during election periods and on a day-to-day basis.

For instance, Twitter has improved its algorithm to stamp out bot accounts targeting election-related content, while Google and Facebook have committed to improving accountability and transparency regarding political advertisements on their platforms.

Facebook has also rolled out a new “Related Articles” initiative (which gives users more context about why a story is false) after realising that its previous “Disputed Tags” initiative may not be effective.

However, there are limitations to self-regulation by Internet intermediaries. For one, technology companies are still experimenting with various techniques and re-examining their philosophy and approach towards regulating content on their platforms. There is no fool-proof solution yet.

Second, some observers doubt that these technology giants would have sufficient will and incentive to self-regulate effectively because they would ultimately prioritise profit-maximising goals.

Third, technology companies still have a long way to go in upholding standards of accountability and transparency, as seen from Facebook’s failure to inform its users about the breach of trust in the Cambridge Analytica scandal. Hence, other measures are needed.

Three-pronged approach: Government-led, industry-led and ground-up

Another important measure to tackle the problem of deliberate online falsehoods is fact-checking. Research shows that debunking inaccuracies in a piece of information and repeating corrective information can help mitigate the effects of misinformation.

Thus, it is crucial to provide and support fact-checking facilities that the public can access.

Currently, fact-checking efforts in Singapore are led by the Government.

One example is the Factually webpage run by the Ministry of Communications and Information, which seeks to clarify and dispel false information that has gained significant public attention, such as the WhatsApp rumour claiming that Singaporeans’ Central Provident Fund savings would be transferred to their nominee’s Medisave account by default upon death.

While Government-led fact checking efforts are important, they are insufficient on their own. Research shows that people with low trust in government and institutions are more likely to believe in rumours, conspiracy theories and alternative narratives. During a crisis where the Government itself is embroiled in a deliberate online falsehood, people are likely to turn to non-government platforms to seek information or verification.

Thus, we should encourage the establishment of non-government fact-checking to complement government efforts by increasing the number of avenues that people can turn to in times of doubt and fear. These can be either industry-led or ground-up initiatives.

One example of industry-led fact-checking is the BBC’s fact-checking service, Reality Check, which targets fabricated content masquerading as real news.

We can look to Indonesia for ground-up fact-checking efforts that tap on collective participation — Indonesia’s Anti-Hoax Community counters the spread of false information online using a crowdsourcing-based application called Turn Back Hoax.

The application curates false information and hoaxes circulating on the Internet and on social media, and is a resource the public can use to verify information.

Strengthening critical literacy, recognising personal biases

Nevertheless, fact-checking is not a panacea and has its limitations. It takes place only after a falsehood has spread, and considerable effort is needed to put out corrections in a timely fashion.

Thus, it is necessary to strengthen critical literacy among Singaporeans. The National Library Board’s SURE campaign, the Media Literacy Council’s Better Internet Campaign, and the Ministry of Education’s Cyber Wellness programme seek to increase information and media literacy among different segments of the population through various avenues. However, more can be done.

First, cultivating critical literacy should go beyond equipping people with the skills to recognise the features of a piece of online falsehood. It should focus on building competencies in questioning the content of a message, its source and the motivations behind it.

Second, it should make people more aware of their biases and how the online space exacerbates them. Research shows that corrective information may not always change people’s beliefs, especially when the new information conflicts with their pre-existing beliefs.

Thus, critical literacy should help people recognise these cognitive biases that can hinder their assessment of the veracity of information they encounter online.

Third, critical literacy can be better incorporated into the core education curriculum, and not taught in silos.

There is no single solution to the problem of deliberate online falsehoods. Only by deploying a suite of measures with complementary strengths will we stand a chance in this fight.

 

This commentary is written based on the authors’ submission in their personal capacity to the Select Committee on Deliberate Online Falsehoods.

Shawn Goh is a Research Assistant and Dr Carol Soon is a Senior Research Fellow at the Institute of Policy Studies.

This piece was first published on Channel NewsAsia on 24 March 2018.

