Uproar over the Federal Network Agency: Understanding the Role of Trusted Flaggers

The Federal Network Agency in Germany has approved the first “trusted flaggers” to identify potentially illegal content on social networks, sparking concerns about state censorship. Critics worry that this initiative, linked to the Digital Services Act, could lead to the removal of lawful content under the guise of combating hate speech and disinformation. However, experts clarify that trusted flaggers only expedite reporting processes, leaving final decisions to the platforms, which must still comply with national laws.

The Federal Network Agency has approved the first trusted flaggers, who may notify social media platforms about potentially illegal content. This development has raised concerns about possible state censorship. Let's examine the facts.

The announcement by the Federal Network Agency in early October regarding the approval of the ‘REspect!’ trusted flagger has sparked widespread discussion on social media. Concerns have been voiced about a ‘Digital Stasi,’ ‘green censorship,’ and a ‘modern denunciation’ culture. Critics fear that the trusted flaggers could be used to remove content deemed undesirable, particularly given that the agency is led by Green politician Klaus Müller. Nevertheless, many assertions about the role of trusted flaggers are misleading, often exaggerating their capabilities.

Understanding Trusted Flaggers

The German term for trusted flaggers translates roughly to 'trusted whistleblowers.' The EU introduced the concept in the Digital Services Act (DSA), which applies across all member states. The DSA entered into force in November 2022 and was transposed into German law in May 2024. In Germany, the Federal Network Agency is the authority that approves trusted flaggers.

The DSA’s primary objective is to expedite the removal of illegal content from online platforms, including social networks like Facebook and e-commerce sites such as Amazon. The act focuses on serious offenses, including child sexual abuse imagery and counterfeit goods, as well as ensuring the quicker removal of hate speech and insults.

The role of trusted flaggers is to assist in ‘identifying potentially illegal content and alerting online platforms,’ as stated by the EU Commission. Organizations aiming to be recognized as trusted flaggers must exhibit ‘specific expertise and competence’ in detecting and reporting illegal content. It’s also essential for them to maintain independence from online platforms to guarantee ‘careful, precise, and impartial reporting.’

According to Anna Bernzen, a law professor specializing in digitalization at the University of Regensburg, the rationale behind this initiative is that the flagged content has been thoroughly evaluated beforehand, thereby increasing the likelihood of accurate reporting from trusted flaggers compared to average users.

Prioritization of Notifications

The DSA stipulates that content reported by trusted flaggers must be 'prioritized by online platforms and handled promptly.' Some users mistakenly interpret this to mean that platforms must remove reported posts immediately. Bernzen clarifies that while these reports receive priority, the final decision on the content remains with the platforms themselves.

From a legal standpoint, trusted flaggers function more as facilitators than arbiters. The DSA does not impose strict timelines for how quickly online platforms must act on these reports, intentionally allowing flexibility. For obvious cases of illegal material—such as child exploitation—prompt action is expected, whereas disputed cases may necessitate a more complex evaluation considering context.

Concerns about Censorship

Critics worry that trusted flaggers could lead to the removal of legal content. The Federal Network Agency's announcement underscores platforms' obligation to respond swiftly to reports. However, 'hate speech' and 'fake news' are not themselves criminal offenses under German law, which raises the question of whether the system could be misused to report merely unpopular opinions.

Müller clarified that hate speech and misinformation are covered only insofar as they are actually illegal, emphasizing that platforms must handle reported content in line with applicable laws and guidelines. Ultimately, courts have the final say in such determinations.

The Federal Network Agency has reinforced that reports from trusted flaggers must pertain to content with potential criminal implications, obligating platforms to act in accordance with national laws when deciding on such reports.

No Automatic Penalties

Many users are skeptical of trusted flaggers because the DSA allows fines of up to six percent of annual revenue for non-compliance. However, Bernzen notes that a single erroneous decision by a platform based on a trusted flagger report does not directly trigger penalties. Platforms retain the authority to leave flagged content online if, upon review, they deem it not illegal. What does violate the DSA is failing to implement the systems needed to prioritize such reports.

Additionally, the Federal Network Agency notes that larger platforms, those with more than 45 million monthly users in the EU, fall under the jurisdiction of the Irish Digital Services Coordinator and the EU Commission for DSA-related complaints.

Torsten Kraul, a partner at the law firm Noerr, notes that trusted flaggers must publish annual transparency reports on their activities. They must continuously demonstrate their reliability; failure to meet the required standards can cost them their trusted flagger status.