The Internet
Over the past fifteen years or so, new public platforms have developed on the Internet, and the content that users do or do not get to see is often managed by algorithms.
The infrastructure of the Internet consists of innumerable machines, cables, software systems and automated processes. If one understands segments of the Internet, or of the World Wide Web built on top of it, as spheres of the general public, then access to them, and the ability to inform and express oneself freely on them, touches upon the issue of social participation.
UPLOAD FILTERS AND AUTOMATED MODERATION
The EU copyright law reforms planned for the spring of 2019 have instigated a number of major debates in Germany. Critics fear that Article 13 of the EU directive on copyright will make the implementation of so-called upload filters mandatory. Under Article 13, Internet service providers that allow users to upload and publish content would be forced to automatically examine that content for potential breaches of copyright law. Critics say that mistakes will be inevitable and that the law could infringe upon citation rights and freedom of expression. In the past, faulty decisions made by YouTube’s upload filters – used to detect copyrighted music and films – have repeatedly caused trouble. Despite substantial progress in the field of Machine Learning, upload filters cannot “understand” the context of videos. For example, is the music only played in the background at a public event? Does a video show only a short snippet of a movie for documentary purposes? The same problems occur with voice assistants such as Alexa or Siri. Anyone who has ever tried “talking” to these devices knows that it will be a long time before such systems can detect irony or other nuances of human communication, never mind interpret them correctly.
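To make the limitation concrete, here is a minimal sketch of how a content-matching upload filter might work. Everything in it is a simplified assumption for illustration (the fingerprinting method, the reference database and the function names are invented); real systems such as YouTube’s Content ID use far more robust matching, but they share the basic structure: they can only ask whether known material is present, not why.

```python
# Minimal, hypothetical sketch of a content-matching upload filter.
# Names and the fingerprinting method are invented for illustration.

import hashlib

# Hypothetical database: fingerprints of copyrighted reference material.
reference_fingerprints = {
    "d1c2e3f4a5b6c7d8": "Record label X - Song Y",  # placeholder entry
}

def fingerprint(chunk: bytes) -> str:
    """Reduce a chunk of media data to a short, comparable fingerprint."""
    return hashlib.sha256(chunk).hexdigest()[:16]

def check_upload(media_chunks: list[bytes]) -> list[str]:
    """Return the reference works matched by any chunk of the upload."""
    matches = []
    for chunk in media_chunks:
        fp = fingerprint(chunk)
        if fp in reference_fingerprints:
            matches.append(reference_fingerprints[fp])
    return matches

# The filter only answers "does this upload contain known material?".
# It cannot tell whether the match is background music at a public event,
# a short quotation for documentary purposes, or a parody: the contextual
# questions that decide whether the use is actually lawful.
```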
Any plans and regulations (see box on the Network Enforcement Act – NetzDG) that deal with the automated moderation of content are as problematic as upload filters. Fully automated filtering of content, e.g. on social media, increases the risk of operators blocking and deleting more, rather than less, content: automated procedures may be instructed to “over-block” in order to avoid potential fines. In this context, the use of ADM systems needs to be viewed critically because freedom of expression and freedom of information might be infringed. This particularly affects young people, because young content producers find it more difficult than older people to find a voice in traditional media due to their relatively lower professional status and smaller personal networks.
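The over-blocking dynamic can be illustrated with a toy example. This is not any platform’s actual system; the risk scores and threshold are invented to show why liability pressure pushes operators toward removing too much rather than too little.

```python
# Illustrative sketch (hypothetical scores and threshold): if fines are
# attached to illegal content left online, the cheapest response is to
# lower the removal threshold, which removes legitimate posts as well.

def moderate(posts, threshold):
    """Remove every post whose (hypothetical) risk score reaches the threshold."""
    removed = [p for p in posts if p["risk_score"] >= threshold]
    kept = [p for p in posts if p["risk_score"] < threshold]
    return removed, kept

posts = [
    {"text": "Satirical commentary on a politician",  "risk_score": 0.55},
    {"text": "News report quoting an extremist flyer", "risk_score": 0.60},
    {"text": "Actually illegal incitement",            "risk_score": 0.90},
]

# A cautious operator facing fines picks a low threshold ...
removed, kept = moderate(posts, threshold=0.5)

# ... and the satire and the news report are removed along with the
# illegal post, even though only the last one breaks the law.
print([p["text"] for p in removed])
```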
PLATFORMS AND INTERMEDIARIES
At the moment, the debate about large social media platforms focuses on services such as Facebook and YouTube. Through content control, often referred to as “curation” – a process that is mostly unintelligible to outsiders – these platforms strongly influence what content users get to see. The platform operators curate content to encourage users to stay on the platform as long as possible and to comment on and recommend content. That way, users see more advertisements, which increases revenue for the operators.
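A rough sketch of what such engagement-driven curation can look like is given below. The scoring fields and weights are invented for illustration and do not describe any platform’s actual ranking; the point is that the objective being optimized is time spent and interaction, not relevance or public interest.

```python
# Hypothetical sketch of engagement-driven curation. Field names and
# weights are invented; real ranking systems are far more complex.

def engagement_score(post):
    # Posts that provoke reactions and comments keep users on the platform
    # longer, so they weigh heavily; informational value is not a factor.
    return (
        0.5 * post["predicted_watch_seconds"]
        + 2.0 * post["expected_comments"]
        + 1.0 * post["expected_shares"]
    )

def curate_feed(candidates, n=10):
    """Return the n posts most likely to maximize time spent on the platform."""
    return sorted(candidates, key=engagement_score, reverse=True)[:n]
```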
For a long time, critics have complained that services such as Facebook and YouTube – as opposed to traditional publishers – have only limited legal responsibility for the content that is published on their sites and customized for their users. However, it is increasingly recognized that platforms represent a new category of service that can neither be equated with traditional publishing models nor reduced to the simple provision of technical infrastructure. Hence, digital platforms have come to be seen as “intermediaries” that act as matchmakers between the producers of content on the one hand and the readers and viewers on the other. The latter – and this is a decisive characteristic – can also be producers of content. It is beyond doubt that automated (preliminary) decision-making systems play a dominant role in determining the way these producers can take part in discourse. Due to the sheer number of their users, it can reasonably be assumed that leading services such as Facebook or YouTube represent a substantial part of the media public sphere. Therefore, a significant part of the public sphere is (co-)determined by ADM.
Regulations such as the NetzDG (see box), the drafts for the EU directive on copyright law (see above) and the proposed regulation on preventing the dissemination of terrorist content online implicitly ensure that automated systems gain a greater influence on people’s participation. The proposed liability rules would leave many intermediaries with only two options: either they radically reduce or cancel the services they offer, or they use filter software that makes automated preliminary decisions on which content is published and which is not.
TRADITIONAL MEDIA
The debate on the effects of filters on the Internet is often reduced to aspects of disinformation through “fake news”, filter bubbles (“echo chambers”) and “hate speech”. However, even the very detailed counting of access numbers for journalistic online services affects the content and most likely plays a great part in what communication researchers call tabloidization or the “softening of news”, i.e. media outlets producing ever more entertainment content at the expense of information. Strategies such as “click-baiting” – optimizing headlines and opening paragraphs to attract as many hits as possible, without hesitating to capitalize on hyperbole and false promises – have become increasingly feasible thanks to automated analytics systems.
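The mechanics behind automated headline optimization can be sketched as a simple test loop. The variant headlines, data structures and exploration rate below are invented for illustration; real analytics systems are far more elaborate, but the underlying logic of rewarding whatever gets clicked is the same.

```python
# Hypothetical sketch of automated headline ("click-bait") optimization:
# serve several headline variants for the same article and keep showing
# the one with the highest click-through rate.

import random

variants = {
    "Council passes new housing budget": {"shown": 0, "clicked": 0},
    "You won't believe what the council just decided": {"shown": 0, "clicked": 0},
}

def click_rate(stats):
    return stats["clicked"] / stats["shown"] if stats["shown"] else 0.0

def pick_headline():
    """Mostly show the current best variant, occasionally explore the others."""
    if random.random() < 0.1:  # exploration
        return random.choice(list(variants))
    return max(variants, key=lambda h: click_rate(variants[h]))  # exploitation

def record(headline, clicked):
    variants[headline]["shown"] += 1
    if clicked:
        variants[headline]["clicked"] += 1

# Nothing in this loop rewards accuracy or restraint: the hyperbolic
# headline wins if it attracts more clicks, which is exactly the dynamic
# described above as the "softening of news".
```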
Those who view this critically only in connection with services such as Buzzfeed, Bento, Vice or the Huffington Post overlook the fact that traditional newspapers such as Der Spiegel, Die Zeit, Süddeutsche Zeitung and Frankfurter Allgemeine Zeitung also spend a lot of money and resources on highly developed data evaluation in order to market their own content online and increase advertising revenue. Today, humans are still involved in the editorial process, but data-driven, automated optimization for clicks and hits (search engine optimization – SEO) also shapes these digital services.
Network Enforcement Act (NetzDG)
In the autumn of 2017, the Act to Improve Enforcement of the Law in Social Networks (Network Enforcement Act – NetzDG) came into force in Germany. It is meant to counter hate speech and punishable “fake news” on social networks. Among other provisions, the law stipulates that operators of social networks such as Facebook, Twitter and YouTube have to offer their users a simple reporting system. Content that is obviously illegal must be deleted or blocked within 24 hours of being reported. In the case of infringement, fines of several million Euros can be imposed. If operators receive more than 100 complaints per year, they must publish a report every six months on the complaints received and their measures to block or delete content [LINK].
The introduction of the NetzDG was controversial. Some critics were concerned that the law was passed quickly so that it would come into force before the 2017 parliamentary elections. The law was also criticized for leaving decisions about illegal content to private companies. Concerns were also voiced that premature deletion (“over-blocking”) in order to avoid fines could infringe upon freedom of expression. Supporters of the NetzDG suggested that people who had previously abstained from using social networks due to violent and degrading language (“digital violence”) could now participate.
Report on complaints received and deletions made by Google/YouTube: https://transparencyreport.google.com/netzdg/.
Report on complaints received and deletions made by Facebook: https://de.newsroom.fb.com/news/2019/01/facebook-veroeffentlicht-zweiten-netzdg-transparenzbericht/.
Report on complaints received and deletions made by Twitter: https://transparency.twitter.com/en/countries/de.html.