Editor's Note

Sofia Calabrese, Digital Policy Manager at the European Partnership for Democracy (EPD), explores the impact of the EU Digital Services Act (DSA) in combating online disinformation and assesses its implications beyond the EU, specifically for Asia. Although the DSA’s scope is restricted to illegal content, the author argues that it has the potential to reduce disinformation by introducing rules that enhance the transparency and accountability of online platforms. The author envisions that, through efforts to refine pertinent legislation and prevent misuse, the DSA could serve as a guiding framework for Asian countries in regulating disinformation.

In 2022, after two years of heated negotiations, European Union (EU) institutions reached an agreement on the final text of the Digital Services Act, the new EU law to tackle illegal content online. “The Digital Services Act will set new global standards. [...] We have finally made sure that what is illegal offline is also illegal online,” said Christel Schaldemose, the lead Member of the European Parliament on the file, after the agreement was reached (European Parliament 2022).

 

In a nutshell, this statement reflects the ambitions of EU policymakers: for the file to become a new gold standard for platform regulation in the EU and beyond, and to fulfil its primary goal of tackling illegal content online.

 

This article will focus on these two aspects: on the one hand, we will explain how, despite its scope being limited to illegal content, the Digital Services Act can also be effective in addressing online disinformation; on the other hand, we will explore the potential impact of the Digital Services Act beyond the EU, in particular on potential regulatory regimes on disinformation in Asia.

 

Background: The EU Digital Services Act and the Political Context around Disinformation

 

The Digital Services Act (DSA) is a recently adopted EU Regulation aiming to address the proliferation of illegal content online. Formally signed into law in 2022, the DSA is currently in force. For some rules - particularly those addressed to Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) - the deadline to ensure compliance was August 2023, while the remaining rules will become applicable as of February 2024.

 

The DSA was one of the main milestones within the EU digital policy agenda during the current European Commission’s mandate, alongside the Digital Markets Act - establishing rules to ban certain anti-competitive behaviors online; the Artificial Intelligence Act - introducing risk-based obligations for AI systems; and broader rules on the governance and re-use of non-personal data.

 

More specifically, the DSA puts forward a series of rules applicable to different kinds of online intermediaries, differentiated by type and size, with obligations that apply cumulatively. These obligations range from rules and mechanisms for content moderation, applicable to all kinds of online platforms, to fully fledged risk assessments, conducted only by VLOPs and VLOSEs.

 

The scope of the DSA is limited to illegal content and does not directly address harmful content online. Content such as bullying, harassment, (non-illegal) hate speech,[1] and online disinformation is therefore outside the DSA’s scope. The exclusion of harmful content was a conscious decision by the European Commission to avoid debates over what constitutes harmful content and whether it is reasonable to restrict speech that is not, per se, illegal - with the attendant risk of abuse and censorship on the part of platforms and governments alike.

 

In this framework, the solutions put forward in the DSA against harmful but lawful content are relatively indirect, as they focus on ensuring that platforms are transparent and accountable in how they moderate such content. The EU rulebook is, however, not the only proposed solution in Europe. The UK’s Online Safety Bill, for example, initially took a different approach, requiring the platforms with the largest audiences and a range of high-risk features to set out clear policies on harmful disinformation (Government of the United Kingdom 2022a). Negotiators subsequently backtracked on the inclusion of harmful content over concerns about limiting freedom of expression, showing how controversial it is to regulate disinformation through legal measures (Government of the United Kingdom 2022b).

 

At the same time, the issue of online disinformation has become salient in EU political discussions, especially following the start of the war in Ukraine, the recently re-ignited Israel-Hamas conflict, and the upcoming EU elections. In this context, while the DSA does not address disinformation directly, it still contains several rules that aim to increase the transparency and accountability of online platforms. EU member states have also individually adopted different approaches to what should be subject to content moderation when it comes to disinformation. France and Germany enacted restrictive national laws against election misinformation in 2018 and online hate speech in 2017, respectively. Other European states, such as Austria, Bulgaria, Lithuania, Malta, Romania, and Spain, have also recently introduced or modified regulations to fight disinformation (van Hoboken and Ó Fathaigh 2021).

 

Recent controversies and court rulings surrounding state fact-checking, new legislation, and online content moderation practices have drawn increased attention to this subject (European Digital Rights 2020; Goujard 2021). In Hungary, for example, the 2020 Enabling Act made disseminating fake news a criminal offense punishable by up to five years of imprisonment. While such laws have been used to tackle disinformation online, they also serve political purposes: in some cases, they have been used to silence opposition and criticism of those in power. Even at the EU institutional level, the fight against disinformation has already been invoked - notably by Internal Market Commissioner Thierry Breton - to advocate for social media shutdowns during crises and to prompt the removal of disinformation beyond what the DSA actually provides for (Goujard and Camut 2023; Meyers 2023).

 

In East Asia, there are many examples of this tension between tackling the spread of fake news and government abuses that threaten free speech. It is a well-documented phenomenon that governments themselves act as spreaders of disinformation or use regulation against fake news to remove content from the political opposition (Ong 2021). In Thailand, the ban on disseminating ‘false messages’ during the COVID-19 crisis drew criticism for trying to shield the authorities from public backlash over their handling of the pandemic (Reuters 2021). In Myanmar, the military regime has been working on a new cybersecurity law that, among other things, seeks to criminalize the use of VPNs to access banned Western social media platforms (Chau and Oo 2022). In Vietnam, ‘toxic content’ has generally been defined as content that is detrimental to the reputation of the authorities and the ruling Communist Party (Luong 2018). Finally, rights groups in Malaysia have called out its fake news law as a smokescreen to suppress online dissent (Guest 2021).

 

Given that both Europe and East Asia face the dual challenge of tackling online disinformation while preventing government abuses against free speech, it is worth considering how the EU’s solutions against disinformation in the DSA would work in practice and whether they could truly represent the “global standard” that EU policymakers wish for.

 

How the Digital Services Act Can Be Effective Against Disinformation

 

While the DSA directly regulates neither harmful content in general nor disinformation in particular, several of its provisions nonetheless have the potential to make an impact in tackling online disinformation.

 

First, the DSA contains numerous transparency obligations covering online platforms and how they moderate content. For example, under the DSA, platforms must be more transparent about their terms and conditions; they must publish transparency reports on their moderation activities and provide statements of reasons for the content they remove; and VLOPs and VLOSEs will be subject to even broader reporting obligations. Users will also be able to file complaints against moderation decisions taken against them.

 

On top of these transparency obligations, VLOPs and VLOSEs will also be obliged to conduct systemic risk assessments and take related mitigation measures, including with regard to fundamental rights such as freedom of expression and information, civic discourse, and electoral processes. Under the DSA, these assessments will also be subject to independent external audits.

 

Rules on data access for researchers complement this increased transparency. Research on social media discourse has been crucial in identifying the problems and threats posed by disinformation. Researchers, however, have faced significant limitations in this work due to problems with accessing data, as it was the platforms themselves that decided on access or signed voluntary commitments covering specific categories of data, for example under the Code of Practice on Disinformation. Under the DSA, by contrast, VLOPs and VLOSEs will be obliged to provide vetted researchers with the data needed to study the systemic risks stemming from their services.

 

The DSA also strengthens platform obligations regarding recommender systems, requiring platforms to explain the main parameters behind them and to consider them in risk assessments and risk mitigation measures. Recommender systems play an important role in facilitating the spread of disinformation, as they decide which content is displayed to users, often based on criteria that are opaque to users and researchers alike. Some studies have also investigated their tendency to favor more polemic or controversial content, such as fake news. To see the mechanism in the abstract, consider the sketch below.
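As a purely illustrative aside, the following toy Python sketch shows why a feed ranked solely by predicted engagement tends to surface polemic material: posts that attract more clicks and shares score higher, regardless of their accuracy. All post texts, numbers, and weights here are hypothetical and do not describe any real platform’s algorithm.

```python
# Toy sketch: why engagement-only ranking can favor controversial content.
# Everything here is hypothetical; no real platform's parameters are shown.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # hypothetical model estimate per impression
    predicted_shares: float  # hypothetical model estimate per impression

def engagement_score(post: Post) -> float:
    # A naive objective: shares are weighted more because they spread content.
    return post.predicted_clicks + 3.0 * post.predicted_shares

feed = [
    Post("Local council publishes annual budget report", 0.10, 0.01),
    Post("OUTRAGE: shocking unverified claim goes viral!", 0.35, 0.20),
    Post("Community garden opens this weekend", 0.12, 0.02),
]

# Ranking by engagement alone puts the inflammatory, possibly false post
# on top - exactly the kind of dynamic that the DSA's transparency and
# risk-assessment rules aim to expose rather than leave opaque.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```

In this toy example, the unverified “outrage” post ranks first simply because the scoring rule rewards reactions; the DSA’s requirement to disclose such “main parameters” is meant to make precisely this kind of design choice visible to users and researchers.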

 

Finally, the DSA also foresees the possibility for VLOPs and VLOSEs to sign voluntary codes of conduct to tackle specific challenges linked to systemic risks, compliance with which would be monitored regularly by the authorities in charge, including the European Commission and the national Digital Services Coordinators.

 

Furthermore, the DSA is not the only EU instrument against disinformation; it works alongside soft-law solutions such as the EU’s strengthened Code of Practice on Disinformation - which could soon be converted into a code of conduct under the DSA. So far, the code has 34 signatories, including platforms, tech companies, and civil society organizations, and it sets out extensive commitments by platforms and industry to fight disinformation, such as cutting the financial incentives for spreading disinformation and expanding fact-checking.

 

In parallel, work has been ongoing to ensure media freedom in the EU through the European Media Freedom Act (EMFA), which contains rules to protect media pluralism and independence, and to ensure the transparency of political advertising through the Transparency and Targeting of Political Advertising (TTPA) Regulation.

 

Taken together, these initiatives are set to bring more transparency and, as a result, more accountability to online content removal, potentially reducing the scope for abuse and censorship.

 

Potential Impact of the Digital Services Act Beyond Europe and in East Asia

 

The DSA was drafted as one of the top priorities of the EU’s digital agenda, and the EU has been betting on the new rules becoming an international gold standard that extends beyond its borders. The EU considers the General Data Protection Regulation (GDPR) an example of the so-called “Brussels effect” and wishes to replicate its success with platform regulation. It is therefore very likely that the EU will promote the legislation as a best practice to inspire laws in other countries, including in East Asia.

 

It is, however, unclear whether exporting a piece of legislation to different geopolitical contexts is an effective approach. As highlighted previously, the DSA is far from an isolated set of rules: it operates alongside complementary initiatives, including the Code of Practice on Disinformation, the EMFA, the TTPA, and even the GDPR. Additional activities, such as promoting media literacy and adequately funding independent fact-checkers, also support the DSA framework and will make or break its effectiveness against disinformation.

 

Another issue with exporting the DSA is that governments could misuse replicated provisions to restrict freedom of expression. For example, provisions originally meant for illegal content could be turned against content such as disinformation as a pretext to silence criticism and opposition. This scenario is not unlikely: it has already manifested itself in the EU with Commissioner Thierry Breton’s statements on the Israel-Hamas conflict referred to above, and it risks exacerbating censorship under the incorrect justification that such a solution has already been adopted and proven effective in the EU.

 

Finally, even if the DSA rules are replicated in a similar context and with the same scope, they could still be abused. Additionally, it is too early to say whether the proposed solutions will be effective, as the rules have not yet been fully implemented.

 

On the other hand, two positive elements of the EU law stand out as potential inspiration: its scope, clearly limited to illegal content, and its increased transparency, which will make more data available and could indicate how to deal with this issue further.

 

Conclusions

 

With disinformation posing a major risk to online civic discourse and free elections, and given the unstable international situation, it is essential to continue monitoring the DSA’s implementation and to draw conclusions from the available data. While the DSA model still needs to prove its effectiveness, it may serve as a guideline and potential best practice for Asian countries, offering them a possible path toward a healthier and more democratic online sphere. Conversely, the EU can also learn from the experience of other countries, especially from abuses of content regulation laws, to make sure that the enforcement of the DSA remains limited to illegal content and does not end up justifying censorship or otherwise interfering with free speech. ■

 

References

 

Chau, Thompson, and Dominic Oo. 2022. “Myanmar renews plans to curb internet usage with VPN ban.” Nikkei Asia. January 21. https://asia.nikkei.com/Spotlight/Myanmar-Crisis/Myanmar-renews-plans-to-curb-internet-usage-with-VPN-ban

 

European Digital Rights. 2020. “French Avia law declared unconstitutional: what does this teach us at EU level?” June 24. https://edri.org/our-work/french-avia-law-declared-unconstitutional-what-does-this-teach-us-at-eu-level/

 

European Parliament. 2022. “Digital Services Act: agreement for a transparent and safe online environment.” April 23. https://www.europarl.europa.eu/news/en/press-room/20220412IPR27111/digital-services-act-agreement-for-a-transparent-and-safe-online-environment

 

Goujard, Clothilde. 2021. “German Facebook ruling boosts EU push for stricter content moderation.” Politico. July 29. https://www.politico.eu/article/german-court-tells-facebook-to-reinstate-removed-posts/

 

Goujard, Clothilde, and Nicolas Camut. 2023. “Social media riot shutdowns possible under EU content law, top official says.” Politico. July 10. https://www.politico.eu/article/social-media-riot-shutdowns-possible-under-eu-content-law-breton-says/

 

Government of the United Kingdom. 2022a. “Online Safety Bill: supporting documents.” March 17. https://www.gov.uk/government/publications/online-safety-bill-supporting-documents

 

______. 2022b. “Overview of expected impact of changes to the Online Safety Bill.” January 18. https://www.gov.uk/government/publications/online-safety-bill-supporting-documents/overview-of-expected-impact-of-changes-to-the-online-safety-bill

 

Guest, Peter. 2021. “Malaysia’s brand-new ‘fake news’ law is built to silence dissent.” Rest of World. March 15. https://restofworld.org/2021/malaysias-brand-new-fake-news-law-is-built-to-silence-dissent/

 

Hoboken, Joris van, and Ronan Ó Fathaigh. 2021. “Regulating Disinformation in Europe: Implications for Speech and Privacy.” UC Irvine Journal of International, Transnational, and Comparative Law 6, 1: 9-36.

 

Luong, Dien. 2018. “Vietnam’s Internet is in trouble.” The Washington Post. February 19. https://www.washingtonpost.com/news/theworldpost/wp/2018/02/19/vietnam-internet/

 

Meyers, Zach. 2023. “Breton’s megaphone enforcement is no way to tackle disinformation.” Euractiv. October 17. https://www.euractiv.com/section/media/opinion/bretons-megaphone-enforcement-is-no-way-to-tackle-disinformation/

 

Ong, Jonathan Corpus. 2021. “Southeast Asia’s Disinformation Crisis: Where the State is the Biggest Bad Actor and Regulation is a Bad Word.” Social Science Research Council. https://items.ssrc.org/disinformation-democracy-and-conflict-prevention/southeast-asias-disinformation-crisis-where-the-state-is-the-biggest-bad-actor-and-regulation-is-a-bad-word/

 

Reuters. 2021. “Thailand bans ‘false messages’ amid criticism of handling of coronavirus.” July 30. https://www.reuters.com/world/asia-pacific/thailand-bans-false-messages-amid-criticism-handling-coronavirus-2021-07-30/

 


 

[1] Not all hate speech is considered illegal everywhere, as it often depends on national legislation.

 


 

Sofia Calabrese is a Digital Policy Manager at the European Partnership for Democracy.

 


 

Typeset by Hansu Park, Research Associate
    For inquiries: 02 2277 1683 (ext. 204) | hspark@eai.or.kr
 
