What Article 34 of the DSA Could Have Said
Article 34 of the EU Digital Services Act only applies to a small number of the world’s largest online platforms, but its implementation over the past two years provides an excellent window into what mandatory human rights assessments for large companies in any industry could become.
Article 34 requires companies to undertake “systemic risk assessments” at least annually, while Article 42 mandates that these assessments be published. The first two rounds of these assessments have now been published, and many civil society organizations and experts have reviewed their strengths and weaknesses.
However, one aspect that I have found curious all along has been the impact of the text of Article 34 on how these assessments were conducted in practice. Over the years, I have been involved in dozens of voluntary human rights assessments, and one notable difference with these mandatory assessments is how every word in Article 34—indeed, every comma—was dissected by companies for its meaning.
This attention to detail got me thinking: How did the specific text of Article 34 stand up to scrutiny in practice? Taking a moment to review this question will help social media companies evolve their compliance efforts; it may also inform those redrafting the EU Corporate Sustainability Reporting Directive (CSRD) and Corporate Sustainability Due Diligence Directive (CS3D) as part of the EU omnibus simplification package.
My overall observation from both reading and participating in assessments is that Article 34 held up well and that specific phrases used by its drafters shaped essential aspects of how the assessments were undertaken in practice.
For example, the requirement for assessments to evaluate “any actual or foreseeable negative effects for the exercise of fundamental rights” ensured that companies indeed reviewed the impacts on all human rights, as expected by the UN Guiding Principles on Business and Human Rights (UNGPs). While some rights were highlighted explicitly in Article 34, compliance necessitated a more comprehensive approach.
Similarly, the use of the phrase “severity and probability” to describe the required assessment criteria made it clear that impacts on people and society, rather than on the business, should shape the prioritization of risks by companies.
However, several quirks in Article 34 had a disproportionately adverse impact on the assessment methodologies used by companies, resulting in only partial alignment with well-established standards of responsible business conduct and in reports that were less comparable than they could have been. The following two quirks were most significant:
Prioritization Criteria: The term “severity” in the phrase “taking into consideration their severity and probability” was not defined, and as a result, different companies applied different definitions.
Recital 79 of the DSA attempted a non-binding definition, stating that companies could “assess whether the potential negative impact can affect a large number of persons, its potential irreversibility, or how difficult it is to remedy and restore the situation prevailing prior to the potential impact.” However, as those familiar with the UNGPs will spot instantly, this appears to be a bungled attempt at describing the UNGPs’ severity criteria of scope (i.e., number of people affected), scale (i.e., gravity of the harm), and remediability (i.e., ability to make good): the scale criterion is entirely missing, while remediability is referenced twice!
I am unsure if this was deliberate or a drafting error, but it led to different companies using different severity definitions. Some companies (e.g., X) used the UNGPs criteria, some companies (e.g., Google) applied a variation of the UNGPs criteria by combining them, and other companies (e.g., Meta) used different criteria entirely.
The simple fix (see edits below): Define severity via reference to the UNGPs.
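To make the practical effect of that definition concrete, here is a minimal sketch, in Python, of how a prioritization exercise might combine the UNGPs severity criteria of scope, scale, and remediability with probability. The 1-to-5 scoring scale, the averaging of the three criteria, and the example risks are my own illustrative assumptions: neither the UNGPs nor the DSA prescribes a scoring formula, and this is not any company’s actual methodology.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    scope: int          # how many people could be affected (1 = few, 5 = very many)
    scale: int          # how grave the harm is for those affected (1 = minor, 5 = grave)
    remediability: int  # how hard the harm is to make good (1 = easily remedied, 5 = irreversible)
    probability: int    # how likely the impact is to occur (1 = rare, 5 = very likely)


def severity(risk: Risk) -> float:
    """UNGPs-style severity: scope, scale, and remediability considered together.

    A simple average is used purely for illustration; the UNGPs name the three
    criteria but do not prescribe a formula for combining them.
    """
    return (risk.scope + risk.scale + risk.remediability) / 3


def priority(risk: Risk) -> float:
    """Article 34-style prioritization: severity taken together with probability."""
    return severity(risk) * risk.probability


# Hypothetical risks with hypothetical scores, for illustration only.
risks = [
    Risk("Exposure of minors to harmful content", scope=3, scale=4, remediability=4, probability=3),
    Risk("Wrongful removal of lawful political speech", scope=4, scale=3, remediability=2, probability=4),
]

for r in sorted(risks, key=priority, reverse=True):
    print(f"{r.name}: severity={severity(r):.1f}, priority={priority(r):.1f}")
```

The point of the sketch is simply that the resulting ranking depends directly on which criteria feed the severity score; drop one criterion or count another twice, as Recital 79 effectively does, and the priorities shift.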
Risk Taxonomy: The requirement to evaluate “any actual or foreseeable negative effects for the exercise of fundamental rights” (Article 34(1)(b)) was hugely beneficial, ensuring close alignment between DSA systemic risk assessments and human rights assessment best practices.
However, Article 34 complicated matters by listing fundamental rights as just one of four requirements. The other three were clusters of issues around illegal content (Article 34(1)(a)), civic discourse (Article 34(1)(c)), and public health (Article 34(1)(d)). These four requirements were listed as if they were distinct and of equal significance, even though the other three clusters are central to the realization of human rights and would be captured by a rights-based analysis anyway.
This led to all sorts of oddities, such as companies presenting and contrasting risk results in four categories (illegal content, fundamental rights, civic discourse, and public health) as if they were somehow distinct and comparable with each other when they are not. This was thoroughly confusing for practitioners encountering human rights for the first time, as many were.
The simple fix (see edits below): Require an evaluation of “any actual or foreseeable negative effects for the exercise of fundamental rights” and list the other items as topics that must be included in this evaluation.
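Purely to illustrate the structural difference, the two taxonomies could be sketched as data, as below; the shorthand labels are my own, not DSA terms, and the nesting simply mirrors the revised text proposed at the end of this piece.

```python
# Article 34 as drafted: four parallel categories, presented as if distinct and comparable.
current_taxonomy = [
    "illegal content",                                            # Article 34(1)(a)
    "fundamental rights",                                         # Article 34(1)(b)
    "civic discourse, electoral processes and public security",   # Article 34(1)(c)
    "gender-based violence, public health, minors, well-being",   # Article 34(1)(d)
]

# The proposed fix: one rights-based evaluation, with the other items nested as
# topics it must cover (illegal content retained as a separate systemic risk).
revised_taxonomy = {
    "illegal content": [],
    "fundamental rights": [
        "civic discourse",
        "electoral processes",
        "public security",
        "gender-based violence",
        "public health",
        "physical and mental well-being",
        "vulnerable groups and populations",
    ],
}
```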
There are many important topics to discuss regarding the implementation of mandatory human rights assessment requirements, such as how to address the concern that companies simply document existing practices rather than do anything authentically new or meaningful.
However, my hope in this piece has been to illustrate how minor changes in wording in the drafting phase can significantly affect how risk assessments are implemented in the real world. With that in mind, I thought it only fair to attempt a revised version of DSA Article 34, which you can find below. As you will see, the changes are small in volume but quite significant in substance.
I also hope that those involved in redrafting and simplifying the CSRD and CS3D take note of the impact word choices can have on practical implementation and adhere as closely as possible to well-established standards of responsible business conduct, especially the UNGPs and OECD Guidelines.
My advice to companies required to comply with DSA Article 34 is to implement this revised model, which achieves compliance while being more aligned with both well-established best practices in responsible business conduct, such as the UNGPs, and upcoming regulatory requirements, such as the CS3D.
=================
Edits to Improve DSA Article 34
(1) Providers of very large online platforms and of very large online search engines shall diligently identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services.
They shall carry out the risk assessments by the date of application referred to in Article 33(6), second subparagraph, and at least once every year thereafter, and in any event prior to deploying functionalities that are likely to have a critical impact on the risks identified pursuant to this Article.
This risk assessment shall be specific to their services and proportionate to the systemic risks, taking into consideration their severity (as defined by the UN Guiding Principles on Business and Human Rights) and probability, and shall include the following systemic risks:
the dissemination of illegal content through their services;
The risk assessment shall evaluate any actual or foreseeable negative effects for the exercise of fundamental rights, as enshrined in the EU Charter. This evaluation should include in particular the fundamental rights to human dignity enshrined in Article 1 of the Charter, to respect for private and family life enshrined in Article 7 of the Charter, to the protection of personal data enshrined in Article 8 of the Charter, to freedom of expression and information, including the freedom and pluralism of the media, enshrined in Article 11 of the Charter, to non-discrimination enshrined in Article 21 of the Charter, to respect for the rights of the child enshrined in Article 24 of the Charter and to a high level of consumer protection enshrined in Article 38 of the Charter.
[struck out] any actual or foreseeable negative effects on civic discourse and electoral processes, and public security;
[struck out] any actual or foreseeable negative effects in relation to gender-based violence, the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being.
When evaluating actual or foreseeable negative effects for the exercise of fundamental rights, the risk assessment should consider (a) civic discourse, (b) electoral processes, (c) public security, (d) gender-based violence, (e) public health, (f) physical and mental well-being, and (g) vulnerable groups and populations.
(2) When conducting risk assessments, providers of very large online platforms and of very large online search engines shall take into account, in particular, whether and how the following factors influence any of the systemic risks referred to in paragraph 1: (a) the design of their recommender systems and any other relevant algorithmic system; (b) their content moderation systems; (c) the applicable terms and conditions and their enforcement; (d) systems for selecting and presenting advertisements; and (e) data related practices of the provider.
The assessments shall also analyse whether and how the risks pursuant to paragraph 1 are influenced by intentional manipulation of their service, including by inauthentic use or automated exploitation of the service, as well as the amplification and potentially rapid and wide dissemination of illegal content and of information that is incompatible with their terms and conditions.
The assessment shall take into account specific regional or linguistic aspects, including when specific to a Member State.