In Germany, freedom of speech is a right enshrined in Article 5 of the Basic Law (Grundgesetz). Despite this, the NetzDG law, which took effect on 1 January 2018, requires social media sites operating in the country to remove hate speech and fake news within 24 hours of a report to the host platform. The law also requires major social media websites, including Facebook, Twitter and YouTube, to implement a comprehensive complaints-handling process to ensure reports are resolved within an efficient timeframe. While many social media sites already have established complaints procedures, Germany’s NetzDG law appears to be the first to impose pseudo-regulatory burdens on social media websites for content published by their users.
Proposals for legislation targeting fake news and online hate speech are not unique to Germany. As recently as 3 January, French president Emmanuel Macron announced his support for legislation banning fake news during political election campaigns. The potential for prominent online platforms to sway election results became highly contentious in the lead-up to the 2016 US presidential election. Many commentators believe that multiple factors, including the 24-hour news cycle, the increasing number of people using social media feeds as their primary source of news, and Facebook’s algorithms, substantially influenced voter perceptions by determining the content to which they were exposed.
In a statement released on 22 January, Facebook’s Product Manager for Civic Engagement, Samidh Chakrabarti, acknowledged that “Russian entities [had] set up and promoted fake pages on Facebook… essentially using social media as an information weapon.” Describing Facebook’s approach to the difficult balancing act of enabling free speech while preventing the viral spread of misinformation, Chakrabarti said that Facebook will “soon require organisations running election-related ads to confirm their identities [to] show viewers of their ads exactly who paid for them.”
Chakrabarti also highlighted the effect of fake online news on the Australian political system, referring to articles circulated on Facebook in April 2017 which falsely claimed that Labor MP Dr Anne Aly – the first Muslim woman to sit in the Australian Parliament – had refused to lay a wreath at an Anzac Day ceremony. In an interview with Fairfax Media following a barrage of cyber harassment, Dr Aly reflected, “you want this platform of free and open discourse, but… it can have those dangerous consequences. If it becomes predominantly an echo chamber, that goes against the grain of democracy, because it doesn’t expose people to other views and facts.” Similar sentiments were shared by Chakrabarti, who noted, “this medium’s being used in unforeseen ways with societal repercussions that were never anticipated.”
The NetzDG law may mark the dawn of a new age of online user protection or, more ominously, a curtailment of free speech bordering on state censorship; either way, the extent to which governments should intervene in internet publication has become increasingly difficult to assess. Legislators and academics have noted that internet technology advances so rapidly that laws cannot effectively keep up, as demonstrated by the widespread use of VPN technology by citizens evading the Chinese government’s internet firewall.
German internet users wishing to view ‘fake news’ or hate speech removed by social media platforms could use VPN technology to circumvent blocks imposed by Facebook in Germany, by appearing to access the site from a location abroad where the NetzDG law does not apply. Although entertainment streaming services like Netflix have upgraded their systems to detect when audiences use VPNs to access catalogues from other countries, it is unclear whether governments banning fake news will be able to impose such standards on social media websites.
While the NetzDG law is yet to prove effective, one thing remains clear: the positions of the German government and President Macron on ‘fake news’, political polarisation and online hate speech have not gained real traction in the Australian legal context. It was, after all, only four years ago that Senator George Brandis infamously stated that ‘people do have a right to be bigots.’ Although the Racial Discrimination Amendment Bill 2016 was defeated in the Senate in 2017, there have been no substantive attempts by Parliament to take protection against hate speech a step further by interfering with the autonomy of social media websites. That said, if Chakrabarti’s promise to neutralise these risks is fulfilled, and other social media providers adopt similar approaches, any perceived need for domestic legislation may be nullified by self-regulation on the part of the platforms themselves.
By Claire Nielson