In recent times, Africa and Europe have experienced rising levels of online disinformation – the deliberate spread of false or misleading information through digital channels – which has caused serious public harm.
For instance, disinformation has threatened human rights by polarizing citizens on issues involving their security, health, environment, and democracy. Sadly, social media has been exploited by ‘influencers for hire’ to spread disinformation. In many cases, the ringleaders of disinformation campaigns are politicians keen on silencing the voices that oppose their ideas. In response, laws and policies have been enacted that curtail freedom of speech, the open internet, and digital rights.
In Africa, legislation that also stifles access to genuine information has been enacted in countries like Cameroon, Uganda, Ethiopia and Kenya. This is troubling because these laws are ambiguous and fail to differentiate between disinformation and genuine information. Law enforcement agencies also sometimes struggle to interpret them. As a result, citizens exercising their freedom of expression have been arbitrarily arrested, leading to (self-)censorship.
With this context in mind, this blog seeks to explain the most effective measures to be taken against disinformation while protecting human rights in the digital age.
Effective measures against disinformation
The following key stakeholders are crucial in tackling disinformation: fact-checkers, tech companies, civil society actors, and the media. The main question fact-checkers ask themselves is “How do we know that?” To answer it, they take the following steps: first, they identify major fact-checkable public claims by analysing social media, media outlets, and legislative records; second, they examine the evidence at hand; third, they rate the claims on a scale of truthfulness. The International Fact-Checking Network has developed a code of principles to assist fact-checkers in their work. Investing in the training of fact-checkers enables them to double-check facts and figures and helps ensure the quality of the information accessible to the public.
Tech companies, often start-ups, have developed products and tools that help journalists – and even the public – review content for authenticity before disinformation spreads. For example, NewsCheck is a United States-based start-up with a content review and scoring platform. It checks content against numerous ethical codes and combines human and machine collaboration, which speeds up fact-checking and provides transparent validation scoring. At the same time, powerful machine learning algorithms have made it possible to create fake media that is nearly indistinguishable from authentic content. Deepfake detection therefore relies either on algorithms that recognize irregular patterns in media or on blockchain-based source verification. Defudger, a Danish start-up, uses a three-layer system that combines computer vision and Artificial Intelligence (AI) to authenticate digital content and detect deepfakes. Africa and Europe should invest in both human and technological solutions to verify the authenticity of information.
Civil society actors often advocate against human rights violations – in this case, violations of freedom of expression, the open internet, and digital rights. Civil society organizations should undertake public interest litigation challenging the retrogressive laws that undermine these rights. Further, for accountability purposes, they should monitor and report incidents of digital rights violations. Multi-stakeholder collaboration between civil society actors and the media can help debunk and moderate disinformation.
Media outlets should prioritize strategic communication in fighting disinformation, because disinformation campaigns not only disseminate fake news but also aim to spread a malicious narrative. In addition, media stations should reduce their response times to complaints about disinformation content, which minimizes its circulation. The media should also increase transparency in content moderation and conduct periodic policy reviews after public consultations. Multi-stakeholder collaborations that build the fact-checking capacity of editors and journalists will help counter disinformation online. Moreover, working closely with fact-checkers will ensure that disinformation is identified and exposed. Finally, media houses should institute in-house systems to enhance fact-checking and information verification.
Protecting the right to freedom of opinion and expression
Striking a balance between tackling disinformation and upholding the right to freedom of opinion and expression has become a daunting task for governments, civil society organizations, the media and the public at large. This is because everyone tries to assert their opinions in order to translate them into public opinion and, as a result, into political action.
The core element of freedom of opinion and expression is to enable citizens’ engagement in governance discussions. It is unfortunate that some people will go to great lengths to share untrue information in order to intimidate their opponents. Although disseminating disinformation can carry legal consequences, freedom of expression also serves the goal of self-realization. This means that if people are to express their views fully, disinformation may sometimes have to be tolerated and exposed as untruthful rather than censored. There is no guarantee that the truth will always prevail in the contest of ideas, and this is the challenge in balancing the right to freedom of opinion with tackling disinformation.
From my vantage point, governments should desist from selectively applying counter-disinformation laws to target critics, the media, political opposition, and human rights groups. Such repressive laws should be repealed or amended to provide clear definitions of disinformation and to ensure they conform to international human rights standards. Additionally, law enforcement agencies should receive intensive training on what constitutes disinformation and how to combat it without stifling citizens’ rights.
The recommendations are separated into two categories: first, addressing the advertisement-based, attention- and data-driven ecosystem, and second, defining the scope of content management duties for platforms. To break the vicious cycle of ever-increasing attention harvesting and the consequent data harvesting, a three-step approach could be recommended:
Firstly, large companies should be obliged to spend a portion of their advertising budget directly on news companies and, at the same time, to ensure that their advertisements do not sponsor websites or content that carry disinformation or are manipulative.
Secondly, algorithms should ensure the diversity of social media content recommendations. To further improve the diversity of quality content and media pluralism, it is recommended to examine the possibilities of a multi-stakeholder approach among government agencies and politicians. This work-intensive project offers immense opportunities for the future of stability and democracy.
Thirdly, affirmative information networks should be deployed to prevent disinformation from taking root in societies. A centrally coordinated node should signal and pass raw information to local network nodes (for example, election authorities, academic institutions, other authorities, and trusted NGOs), which then translate it into short, appealing content pieces such as memes or videos and disperse them among various social groups.
The second set of recommendations briefly addresses social media platforms’ content moderation as follows: platforms should stick to their intermediary role and empower users to shape the information landscape through their own choices. In this sense, platforms should ensure the ideological neutrality of their content moderation and refrain from discrimination, while respecting human rights.
About the author
Winner of the third prize of the AU-EU D4D Hub’s #D4DBlogging competition, Lilian Olivia Orero is an International Human Rights Lawyer from Nairobi, Kenya. She is passionate about gender equality and data governance and has a particular interest in advocating for digital rights in data-driven infrastructure such as facial recognition, biometric systems, and Artificial Intelligence. She is currently a researcher with Berkman Klein Center for Internet & Society at Harvard University, and a fellow of the Kenya School of Internet Governance 2022.
The content of this blog is the responsibility of the author. The views expressed do not necessarily correspond to those of the European Union or the AU-EU D4D Hub implementing partners.