Atlas der Automatisierung

Automated decision-making
and participation in Germany

Note: The Atlas of Automation is no longer being updated.
Its data is therefore no longer current.


Atlas of Automation –
Automated decision-making and
participation in Germany

1st edition, April 2019


AW AlgorithmWatch gGmbH
Bergstr. 22, 10115 Berlin

Lorenz Matzat


Lukas Zielinski
Miriam Cocco
Kristina Penner
Matthias Spielkamp
Sebastian Gießler
Sebastian Lang
Veronika Thiel

Ralf Grötker

Copy-editing (German):
Karola Klatt

Translation from German:
Maike Majewski

Copy editing (English):
Graham Holliday

Layout & Artwork:
Beate Autering

With support from the Bertelsmann Stiftung
and the Mozilla Foundation.

This publication is licensed under a Creative Commons Attribution 4.0 International License.


Education, Stock Trading, Cities & Traffic

In this chapter, we collect and describe further examples of procedures and processes that use automated decision-making, that can impact aspects of social participation, and that are present in different parts of society.


School administration

Since 2017, an Intelligent Zoning Engine (IZE) has been employed by the education authority in the Berlin district of Tempelhof-Schöneberg. According to the manufacturer, discussions are underway to extend the use of this tool to other districts. The IZE delineates school catchment areas by algorithmic optimization, i.e. first-year pupils are allocated to specific schools depending on their home address. In addition to the maximum number of pupils each school can take and the home addresses of prospective pupils, the IZE also takes into account demographic data at the smallest possible statistical unit (the level of blocks of houses). The manufacturer claims that the tool saves school administrators a lot of time. School catchment areas are determined automatically according to criteria set by the users, such as the shortest routes to school. The company says that “further dimensions for optimization are applicable (socio-economic composition etc.)”. This means that the software could potentially be used to promote, or reduce, segregation, e.g. according to origin or income. [LINK]
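The kind of optimization the IZE performs can be thought of as a capacity-constrained assignment problem. The following toy sketch is purely illustrative: all pupil coordinates, school locations and capacities are invented, and a real engine would use more sophisticated optimization than this simple greedy heuristic.

```python
# Hypothetical sketch of capacity-constrained school assignment.
# All names, coordinates and capacities are invented for illustration.

from itertools import product

# (x, y) home coordinates of prospective first-year pupils
pupils = {"P1": (0, 0), "P2": (1, 0), "P3": (5, 5), "P4": (6, 5), "P5": (6, 6)}

# school location and maximum intake
schools = {"A": ((0, 1), 2), "B": ((5, 6), 3)}

def distance(p, s):
    (px, py), (sx, sy) = p, s
    return ((px - sx) ** 2 + (py - sy) ** 2) ** 0.5

def assign(pupils, schools):
    """Greedily allocate pupils to the nearest school with free capacity,
    processing the globally shortest pupil-school distances first."""
    capacity = {name: cap for name, (_, cap) in schools.items()}
    pairs = sorted(
        product(pupils, schools),
        key=lambda ps: distance(pupils[ps[0]], schools[ps[1]][0]),
    )
    allocation = {}
    for pupil, school in pairs:
        if pupil not in allocation and capacity[school] > 0:
            allocation[pupil] = school
            capacity[school] -= 1
    return allocation

print(assign(pupils, schools))
```

A production system would additionally weigh the “further dimensions for optimization” mentioned by the manufacturer, which is precisely where the segregation concern arises: whatever criterion is added to the objective function is systematically optimized for.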

University Admission

The admission procedure for medical degrees will change in the 2020 summer term. Besides the school leaver’s grade average (Abitur), universities will have to devise another independent selection criterion. This could be the TMS (admissions test for medical degree courses), which was developed at the request of the Standing Conference of the Ministers of Education and Cultural Affairs, completed training in a medical profession, or an aptitude test run by the respective university. In the course of this, the software of the Trust for Admission to Higher Education (Stiftung für Hochschulzulassung, formerly ZVS) will be readjusted. Thus, from 2020, factors other than the Abitur grade are going to be taken into account and weighted in the allocation of university places.


Stock Trading

In stock trading, so-called “high frequency trading” (HFT) has been in operation for about ten years. Computers trade securities within milliseconds or microseconds, managed completely autonomously by software: basically, profit is made by exploiting minimal share price fluctuations through rapid buying and selling. At least one short-term stock market crash, the so-called Flash Crash of 2010, has been attributed to HFT. This event clearly shows how such automated trading systems can severely damage national economies and directly affect various aspects of participation.

In Germany, the High Frequency Trading Act, which was introduced in 2013, contains, among other aspects, provisions for “systems and risk controls” and a “flagging obligation”. The stipulations on HFT could serve as reference points for potential approaches to regulation in other industries and sectors. [LINK]


Smart Cities

Concepts for Smart Cities have been discussed for some time (for rural areas, the equivalent is the “Smart Country”). In essence, infrastructure data (electricity, water, traffic etc.) is gathered and additional data (air quality, noise etc.) is measured by sensors. This data is then used to interconnect infrastructure (“Internet of Things”). At the same time, the movements and behavior of individual people are recorded through their mobile devices and included in the evaluation. This enables new services to be offered (for example, digitally controlled parking zones or personalized advertisements in public spaces). It also allows city administrations to partially automate administrative actions and to manage resources more efficiently.

Usually, democratic opportunities for participation in the sense of Open Government are not part of Smart City concepts. Critics complain that these concepts, to whose development larger tech companies often contribute, are really about setting up surveillance infrastructure as well as new forms of marketing in public spaces. This is countered by the example of the Smart City concept in Barcelona which was developed in cooperation with the inhabitants and is intended to primarily serve their needs. [LINK]


Traffic Lights

In Germany, traffic lights have directed the flow of traffic for about a century. These fairly simple automatons – which in some places are now interconnected via traffic control centers – are mainly found in urban areas, where they manage the relationship between pedestrians and motorized and non-motorized vehicles. In most cases, this implements a car-focused traffic policy that affects people’s health (air pollution, accidents) and the environment.

Navigation Devices

Since the number of smartphones boomed after 2007, navigation devices or apps have appeared in almost every vehicle. They register the location of a vehicle based on satellite data (GPS, Galileo, Glonass) and calculate the best and fastest route to the desired destination. Whether that route passes through residential or nature conservation areas is not part of the consideration.

Engine Control (Dieselgate)

In 2015, it became known that German car manufacturers, in particular Volkswagen AG, had ordered the engine control software of some types of diesel-powered vehicles to be programmed so that the engine automatically ran differently in test situations in order to minimize exhaust emissions. The control systems incorporated in these vehicles implemented the “decision” to take part in this fraud. This affects, for example, air quality in cities and thus the health of older people and children in particular.

Autonomous Driving

The media attention surrounding self-driving cars has declined a little recently, as it will still take a while before this vision of the future becomes a reality. Nevertheless, an Ethics Commission on Automated Driving set up by the Federal Ministry of Transport and Digital Infrastructure met between 2016 and 2017. In its final report, the commission presented 20 “ethical rules” for self-driving cars. Among them, it states that decisions of “life against life” (unavoidable accident situations) cannot be “unequivocally ethically programmed”, and that any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible. An amendment to the Road Traffic Act (StVG) laid the foundation for the introduction of autonomous driving. However, the changes mainly dealt with liability issues, not with the ethical issues the commission raised. It remains doubtful whether these issues can be resolved by regulation at all, and it is an open question how autonomous driving can be realized in principle if it interferes with fundamental rights.



Security & Surveillance

Whether in the physical or the digital world, when it comes to migration, crime, terror and war, software is involved in decision-making.

Sorting, rejecting and predicting – those are the tasks that ADM systems are most commonly expected to perform in the field of security and surveillance. The most prevalent uses are associated with cameras, Internet traffic surveillance, predictive policing, automated border controls and autonomous weapons systems. However, if the police and other security agencies delegate parts of their work to machines and programs, the results can quickly lead to false suspicions. This infringes on freedom of movement and the presumption of innocence, which are core elements of participation.


With its “Digitisation Agenda 2020”, the Federal Office for Migration and Refugees (Bundesamt für Migration und Flüchtlinge – BAMF) aims to tackle problems related to its procedures [LINK]. In 2016, an “integrated identity management” system was introduced. Today it contains several modules that support case managers in their decisions. The system is mainly aimed at finding out whether the details given by those seeking protection are plausible. For example, software is used to try to recognize the language of origin of a person from audio recordings. Initially, the error rate of this so-called speech biometrics was approximately 20 per cent; according to the BAMF, this figure has since been reduced to 15 per cent. By mid-November 2018, the procedure had been used about 6,000 times. Other software in use has its origins in military forensics, the secret services and the police [LINK]. It is able to analyze telephone data, past connection data and saved telephone numbers. The BAMF claims that refugees voluntarily give it permission to access their telephones. In 2018, the analysis of thousands of refugees’ telephones produced usable results in fewer than 100 cases. Further software is employed by the BAMF to compare photographic portraits and various possible transliterations of Arabic names into Roman letters. The BAMF believes that the use of these automated procedures has been a success. However, critics think that the cost of the procedures and the number of errors are too high. They also complain about the lack of transparency in the way the software systems function and the lack of scientific monitoring to evaluate the effectiveness of the procedures.

Since 2013, the EU has been using the “Smart Borders” control system at its borders. As part of the EU-wide “automated border control systems”, Germany already uses completely automated passport controls (EasyPASS) at some airports. Meanwhile, an entry permit system for the Schengen Area (European Travel Information and Authorization System – ETIAS) is currently being developed and is due to come into force in 2021. As with the Visa Information System, a structure for Entry/Exit procedures is currently being established. It is supposed to create a database by interacting with the Passenger Name Records already used in air traffic. The plan is that entries to and exits from the Schengen Area will be recorded and stored centrally from 2020. Biometric data, especially for face recognition, will play an essential part. In addition, the EU Commission is financing an experiment in lie detection to the tune of 4.5 million euros [LINK]. The experiment, called “IBORDERCTRL”, will run until August 2019 and is being tested at the Hungarian and Greek borders, among other locations. IBORDERCTRL consists of a computer-animated border guard on a screen that asks questions of the person entering the EU. The interviewee’s “micro gestures” are recorded by a camera and analyzed to decide whether or not they are lying. Some critics say that IBORDERCTRL is based on “pseudo-science” [LINK].


A collaboration between the Deutsche Bahn AG train company, the German Federal Police (Bundespolizei) and the Federal Criminal Police Office (Bundeskriminalamt – BKA), aimed at using camera surveillance to detect criminal suspects, ended in mid-2018. The camera surveillance tests took place at Berlin’s Südkreuz train station and ended with questionable results. In a section of the station that Deutsche Bahn AG uses as a test area for new technologies, different software systems were set the task of filtering out criminal suspects via face recognition. The systems were tested for almost a year. Officially, the results were seen as a success because the detection rate was 80 per cent while the false alarm rate (FAR) was under one per cent. The Chaos Computer Club (CCC) declared that, in reality, the detection rate of 80 per cent was overstated: this rate was only achieved when the results of all three software systems under test were combined. At Südkreuz station, which sees 90,000 passengers pass through per day, a FAR of circa 0.7 per cent would flag approximately 600 false suspects per day. The CCC also criticized the sample of test subjects, claiming that it was hardly representative of the population at large in regard to age, sex and origin [LINK]. In the United States, a study showed that some of the face recognition systems in use were particularly poor at recognizing African-American women [LINK].
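The figure of roughly 600 false suspects per day is simple arithmetic on the officially reported false alarm rate and the station’s passenger volume, as the following back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the false alarm figure cited by the CCC:
# at ~90,000 passengers per day, a false alarm rate of circa 0.7 per cent
# flags several hundred people per day who are not on any watch list.

passengers_per_day = 90_000
false_alarm_rate = 0.007  # circa 0.7 per cent

false_suspects = passengers_per_day * false_alarm_rate
print(round(false_suspects))  # 630, i.e. "approximately 600" per day
```

Even a seemingly low error rate therefore translates into a large absolute number of wrongly flagged people once the system is applied to mass passenger traffic.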

The number of false detections by recognition systems is increasingly becoming a civil rights issue, especially considering the growing use of camera-equipped aerial drones (quadcopters) by police at demonstrations. For example, during the investigation into the aftermath of the clashes around the G20 summit in Hamburg in 2017, the police searched through hours of image recordings using automated procedures.

The next test phase of the camera surveillance at Südkreuz station—in which ADM systems were going to be used to recognize objects such as suitcases and “unusual behavior” of people—was cancelled by Deutsche Bahn on financial grounds in early 2019. However, a similar test was started in Mannheim at the end of 2018. The Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB) is providing the technology for an “algorithm-based video surveillance (system) in public space for combatting street crime”. The five-year project is supposed to increase the ability, for example, to recognize physical kicks or punches and alert police officers in operation centers about potential incidents. In total, 76 cameras in the inner city will be connected to the system [LINK].

High false detection rates seem to be the rule when it comes to the automated tracing of license plates. In some federal states, for example Bavaria, Hessen and Saxony, such systems have been installed permanently. In other federal states they are only employed selectively or not (yet) at all. A pilot project run by the government of Baden-Württemberg in 2017 found detection errors in about 90 per cent of cases. Other federal states saw similar percentages because they were not using the latest technology; the automated license plate recognition system in Baden-Württemberg, for instance, was first acquired in 2011 [LINK].


At present, predictive policing systems are employed in six federal states. Apart from systems developed by the authorities themselves, systems from various private manufacturers are also in use. The basic goal of predictive policing is to use statistical analysis to identify areas where burglaries of apartments or business premises and car theft are likely to occur. The criminal prognosis is based on models such as the near-repeat theory, which states that burglars tend to strike again near the location of a successful break-in. Using such a system, patrols can be deployed more efficiently. It is unclear, however, whether these location-based systems have positive effects. An accompanying study by the Max Planck Institute for Foreign and International Criminal Law in Freiburg was unable to find any clear evidence of effective prevention or a decrease in criminality during the test phase that ran between 2015 and 2017 in Stuttgart and Karlsruhe [LINK]. Regarding participation, it would be worth examining whether predictive policing might create reinforcing effects that could lead to the stigmatization of specific parts of some towns, cities or other areas. [LINK]
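The near-repeat theory can be illustrated with a minimal, purely hypothetical scoring function: recent break-ins raise the predicted risk of nearby locations for a limited time window. Real products use more elaborate statistical models; the incident data, radius and time window below are invented for illustration.

```python
# Minimal sketch of the near-repeat idea behind place-based predictive
# policing. Incident data and thresholds are hypothetical.

from datetime import date

# (x, y) grid coordinates and date of past burglaries (invented)
incidents = [((2, 3), date(2017, 3, 1)),
             ((2, 4), date(2017, 3, 3)),
             ((9, 9), date(2016, 11, 20))]

def near_repeat_risk(cell, today, incidents, radius=2.0, window_days=14):
    """Count recent incidents within `radius` cells and `window_days` days;
    a higher count means the cell would be flagged for extra patrols."""
    cx, cy = cell
    score = 0
    for (ix, iy), when in incidents:
        close = ((cx - ix) ** 2 + (cy - iy) ** 2) ** 0.5 <= radius
        recent = 0 <= (today - when).days <= window_days
        if close and recent:
            score += 1
    return score

today = date(2017, 3, 10)
print(near_repeat_risk((3, 3), today, incidents))  # 2: two recent break-ins nearby
print(near_repeat_risk((9, 9), today, incidents))  # 0: the old incident has "cooled off"
```

The sketch also makes the feedback concern visible: if flagged cells receive more patrols and therefore more recorded incidents, their scores rise further, which is exactly the reinforcing effect mentioned above.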

“Hessen-Data” works as a person-related system rather than a place-based one. The software is provided by Palantir, a private software company from the USA. As far as is known, the system combines data from social media with entries in various police databases as well as connection data from telephone surveillance in order to identify potential offenders. Acquired in 2017, it is intended to help identify potential terrorists through “profiling”. Hessen’s government is planning to extend its deployment to help detect child abuse and abduction. The necessary legal foundation for “Hessen-Data” was provided by the Hessen Law on Police, which was revised in 2018. An investigative committee, reporting to the Hessen Parliament, is currently trying to clarify issues around the acquisition of the system and to look into questions relating to data protection. Apparently, the system is supervised by Palantir staff, who as a result might have access to private data on individual citizens. [LINK]


Since the 2013 revelations of former NSA contractor Edward Snowden, there is mounting evidence that the secret services of Western countries monitor Internet traffic on a global scale, regardless of whether an individual is involved in suspicious activity or not. It is also a known fact that German secret services collaborate in this activity. Which software systems and procedures are being used is not known, because the options for parliamentary control over these activities are fairly limited. Later in 2019, the Federal Constitutional Court (Bundesverfassungsgericht) will issue eight decisions related to this surveillance activity. Among other things, the decisions will focus on the extension of the rights of the Foreign Intelligence Service of Germany (Bundesnachrichtendienst), on data retention and on the surveillance of telecommunication and postal correspondence. [LINK]


Prompted by the preliminary work of some NGOs, UN bodies have been debating a worldwide ban on autonomous weapons systems since 2017. These include drones in the air, in water and on land that, under specific circumstances, execute deadly violence without further intervention from a human operator. As far as is known, such completely autonomous weapons systems are not yet in operation. However, aerial drones have been equipped with weapons systems that can perform at least some tasks independently. Their arsenals also include systems for the recognition of persons and objects; an independent evaluation of their error rates does not seem to exist. Both the air force and the navy of the German armed forces use various types of aerial drones, at least one of which has the capacity for armament.


Dynamic Risk Analysis Systems

Dynamic Risk Analysis Systems (DyRiAS) are instruments produced by the German company Institut Psychologie & Bedrohungsmanagement (Institute for Psychology and Threat Management – IPBm Projekt GmbH). The instruments assess the risk of potential violent acts by individuals in various social contexts (school, intimate partnership, workplace, Islamist terror etc.). According to the manufacturer, the results of its products are psychologically as well as empirically well-founded. The basic assumption is that observable spirals of escalation precede violent acts. The analysis is supposed to find out “if a person is on a development path that might lead them to a potential attack”. DyRiAS offers an overview of the development of the threat over time while simultaneously creating case documentation. The risk assessment is based on the statistical evaluation of questionnaires that are filled in by case managers (e.g. police personnel).

DyRiAS systems are used by associations for the protection of women in Germany and Austria. In Switzerland, DyRiAS is used in combination with other risk management systems in preventive police work, for example to identify “potential attackers” (Gefährder*innen). According to research by the Swiss TV channel SRF, these instruments deliver high false alarm rates: evidently, the software is configured with a tendency to overestimate risk [LINK].
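One structural reason for such high false alarm rates is the base-rate problem: because targeted violence is extremely rare, even a seemingly accurate screening instrument flags mostly harmless people. The numbers in this sketch are illustrative assumptions, not figures from the SRF research or from DyRiAS itself.

```python
# Why rare-event risk screening tends to produce high false alarm rates.
# All parameters below are illustrative assumptions.

population = 100_000
true_risks = 10              # assumed: 10 in 100,000 actually pose a risk
sensitivity = 0.9            # assumed: 90% of true risks are flagged
false_positive_rate = 0.05   # assumed: 5% of harmless people are flagged too

true_alarms = true_risks * sensitivity                           # 9 correct flags
false_alarms = (population - true_risks) * false_positive_rate   # ~5,000 flags

share_false = false_alarms / (false_alarms + true_alarms)
print(f"{share_false:.1%} of all alarms are false")
```

Under these assumptions, well over 99 per cent of all alarms concern people who pose no risk, which is why independent evaluation of such instruments matters so much.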



The Internet

Over the past fifteen years or so, new public platforms have developed on the Internet and the content that users get to see or not see is often managed by algorithms.

The infrastructure of the Internet is based on innumerable machines, cables, software systems and automated processes. If one understands segments of the Internet, or of the World Wide Web built on top of it, as public spheres, then access to them and the ability to inform and express oneself freely on them touch upon the issue of social participation.


EU copyright law reforms, planned for the spring of 2019, have instigated major debates in Germany. Critics fear that Article 13 of the EU directive on copyright will make the implementation of so-called upload filters mandatory. Under Article 13, Internet service providers that allow users to upload and publish content would be forced to automatically examine that content for potential breaches of copyright law. Critics say that mistakes will be inevitable and that the law could infringe upon citation rights and freedom of expression. In the past, faulty decisions by YouTube’s upload filters – used to detect copyrighted music and films – have repeatedly caused trouble. Despite substantial progress in the field of machine learning, upload filters cannot “understand” the context of videos. For example, is the music only played in the background at a public event? Does a video only show a short snippet of a movie for documentary purposes? The same problems occur with audio assistants such as Alexa or Siri. Anyone who has ever tried “talking” to these devices knows that it will take a long time before such systems can detect irony or other nuances of human communication, never mind correctly interpret them.

Plans and regulations that deal with the automated moderation of content (see box on the Network Enforcement Act – NetzDG) are just as problematic as upload filters. Any fully automated filtering of content, e.g. on social media, increases the risk of operators blocking and deleting more, rather than less, content: automated procedures may be configured to “over-block” in order to avoid potential fines. In this context, the use of ADM systems needs to be viewed critically, because freedom of expression and information might be infringed. This particularly affects young people, because young content producers find it more difficult than older people to find a voice in traditional media due to their relatively lower professional status or smaller personal networks.


At the moment, the debate about large social media platforms focuses on services such as Facebook and YouTube. Through content control, often referred to as “curation” – and mostly unintelligible to outsiders – these platforms strongly influence which content users get to see. The operators pursue this approach to encourage users to stay on the platform as long as possible and to comment on and recommend content. That way, users see more advertisements, which increases revenue for the operators.

For a long time, critics have been complaining that services such as Facebook and YouTube – as opposed to traditional publishers – have only limited legal responsibility for the content that is published on their sites and customized for their users. However, it is increasingly recognized that platforms represent a new category of services that can neither be equated with traditional publishing models nor reduced to the simple provision of technical infrastructure. Hence, digital platforms have come to be seen as “intermediaries” that stand as matchmakers between the producers of the content on the one hand and the readers and viewers on the other. The latter – and this is a decisive characteristic – can also be producers of content. It is beyond doubt that automated (preliminary) decision-making systems play a dominant role in determining the way these producers can take part in discourse. There is a strong implication that, due to the sheer number of users, the leading services such as Facebook or YouTube represent a substantial part of media publicity. Therefore, a significant part of the public sphere is (co-)determined by ADM.

Regulations such as the NetzDG (see box) or the drafts for the EU directive on copyright law (see above) and regulation for the prevention of terrorist content online implicitly ensure that automated systems gain a greater influence on people’s participation. The liability regulations that are suggested would leave only one choice for many intermediaries: Either they radically reduce or cancel the services they offer, or they use filter software that makes automated preliminary decisions on what content is published and what is not.


The debate on the effects of filters on the Internet is often reduced to aspects of disinformation through “fake news”, filter bubbles (“echo chambers”) and “hate speech”. However, even the very detailed counting of access numbers for journalistic online services affects content and most likely plays a great part in what communication researchers call tabloidization or the “softening of news”, i.e. media outlets producing ever more entertainment content at the expense of information. Strategies such as “click-baiting” – the optimization of headlines and opening paragraphs to receive as many hits as possible, without hesitating to capitalize on hyperbole and false promises – have become increasingly viable due to automated analytics systems.

Those who criticize this only in services such as Buzzfeed, Bento, Vice or the Huffington Post overlook the fact that traditional newspapers such as Der Spiegel, Zeit, Süddeutsche Zeitung and Frankfurter Allgemeine Zeitung also spend a lot of money and resources on highly developed data evaluation to market their own content online and increase advertising revenue. Today, humans are still involved in the editorial process, but data-driven, automated optimization for clicks and hits (including search engine optimization – SEO) shapes these digital services as well.


Network Enforcement Act (NetzDG)

In the autumn of 2017 the Act to Improve Enforcement of the Law in Social Networks (Network Enforcement Act – NetzDG) came into force in Germany. It is meant to counter hate speech and punish “fake news” on social networks. Among other provisions, the law stipulates that operators of social networks such as Facebook, Twitter and YouTube have to offer their users a simple reporting system. Content that is obviously illegal must be deleted or blocked within 24 hours of it being reported. In the case of infringement, fines of several million Euros can be imposed. If the number of complaints exceeds 100 per annum, the operators must present a report on complaints and their measures to block or delete content every six months [LINK].

The introduction of the NetzDG was controversial. Some critics were concerned that the law was passed quickly so that it would come into force before the parliamentary elections in 2017. The law was also criticized because it leaves decisions about illegal content to private companies. Concerns were voiced that premature deletion (“over-blocking”) in order to avoid fines could infringe upon freedom of expression. Supporters of the NetzDG suggested that people who had previously abstained from using social networks due to violent and degrading language (“digital violence”) could now participate.

Report on complaints received and deletions made by Google/YouTube:

Report on complaints received and deletions made by Facebook:

Report on complaints received and deletions made by Twitter:


Health & Medicine

ADM is used as an instrument in diagnostics, for therapy decisions, and for the allocation of resources in the health sector.

Not only are clinics, doctors’ surgeries and health insurance companies using ADM in the medical sphere; so are individual citizens. Private users tend to use ADM in the form of apps that go beyond the purely medical angle and enter the realm of well-being and self-optimization. In clinics and surgeries, ADM is often embedded in complex software solutions, adjusted to the requirements of the Ordinance on Medical Devices and to interaction with diagnostic tools, e.g. in radiology.


Apps can support medical professionals in diagnosing illnesses. Algorithms that evaluate image data such as computed tomography (CT), magnetic resonance imaging (MRI) or other medical data that help detect cancer or a need for prenatal medicine, for example, are particularly advanced. Scientific studies show that ADM systems can detect and interpret abnormalities in images more reliably than the trained human eye. Further uses of ADM include helping interpret genetic tests, aiding in robotic surgery and assisting in analyzing databases of scientific literature. [LINK]

Other ADM-supported applications address patients as end users and aim to assist them in the treatment of their disease. Most of these are smartphone apps, which can be of great use, especially for chronic or long-term illnesses. Some of these apps are simple reminders to take medication, while more complex systems provide close monitoring, for example of blood sugar levels, to simplify the management of diabetes.

While ADM-supported apps and other therapy aids can significantly increase therapy success and, in particular, allow chronically ill patients to become more independent from constant monitoring by doctors, they also bring with them a range of problems and risks. From a patient’s perspective, it can be problematic that the ADM systems employed for diagnosis and therapy recommendations are not necessarily geared towards optimizing the patient’s benefit, but can also aim at increasing profits for those who use or market the system. This could happen through recommendations for tests, drugs and medical devices that in turn bring additional costs or excessive side effects for the patients. [LINK]

Regarding aspects of participation, the use of ADM in diagnosis is problematic for a number of reasons. Groups of patients who represent a minority in terms of some biological traits might find themselves systematically disadvantaged because the database used is insufficient for the respective group or leads to misjudgements. For example, in one study assessing the risk of heart disease, algorithm-based diagnosis led to both over and underestimation for patients who did not belong to the white majority of the population.


ADM systems are not only useful for optimizing diagnosis and therapy; they can also enable a more efficient distribution of resources in the health sector.

One example: when deciding on the allocation of donor organs for transplantation, patients are already prioritized according to specific parameters, including urgency and the chances of success. Employing ADM-based allocation systems could lead to self-fulfilling prophecies and thus to systematic disadvantages for some groups of patients. This could, in turn, also impact other medical intervention decisions. For example, patients with brain damage or infants born prematurely could be virtually excluded from some treatments due to their supposedly limited prospects of success. It is possible that an ADM system could entrench such a practice. In this manner, decision-making pathways are reinforced even if they are not legitimized by reliable data on treatment results.

Private health insurance companies are another area where ADM is used. Here, it can help to calculate the individual risk for specific diseases which in turn can be used to adapt insurance policies. To a greater extent than ever before, policy holders could be divided into different risk groups that then have to pay different rates for their insurance. In regard to social fairness and participation, this could exacerbate existing inequalities. Health and lifestyle apps on smartphones could play an important role in this context (think of “quantified self”). Some German health insurance policies already make the use of such apps a prerequisite for receiving favorable insurance tariffs. This could discriminate against insurance holders who cannot use such an app or who would not benefit from it in regard to their tariff. In addition, the way in which the data is gathered and then used to adapt the tariff could also lead to discrimination.


Regulation of Digital Medical Products

ADM-based software systems that are employed in a clinical or outpatient setting have to be registered as Medical Devices. The approval and quality control of Medical Devices is regulated by the Federal Institute for Drugs and Medical Devices in accordance with the Europe-wide Medical Device Regulation.

Currently, health apps for smartphones that address individual consumers are rarely registered as Medical Devices and thus do not undergo any quality control. However, with the new version of the EU Medical Device Regulation coming into force in 2020, the registration requirements for apps will be extended. Apps that take a distinctive diagnostic or therapeutic approach will be placed into a higher risk category. Whether the changes in the revised version are sufficient to remedy the presumed deficiencies is currently being debated.




In the labor market, the potential for discrimination is particularly high. When systems of automated decision-making enter labor and unemployment administration, high vigilance is necessary. The same is true for recruitment processes, internal personnel management and staff performance monitoring.


Algorithm-based selection processes (“Robo-Recruiting”) [LINK] search applicant profiles for specific qualifications and keywords. When there are many applicants for a position, this allows for a preselection. However, humans are usually involved at a more advanced stage of the recruiting process. Apart from the financial savings, the advantages of automation can lie in a selection process that follows clear-cut criteria and eliminates sympathies or aversions of the personnel manager that are irrelevant to the job placement.
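The preselection step described above can be pictured as a simple keyword filter over applicant profiles. The profiles, keywords and threshold in this sketch are invented for illustration and do not reflect any real product:

```python
# Illustrative sketch of keyword-based preselection ("Robo-Recruiting").
# All applicant profiles, keywords and the threshold are hypothetical.

def preselect(applicants, required_keywords, min_matches):
    """Keep applicants whose profile text contains at least
    `min_matches` of the required keywords."""
    shortlist = []
    for name, profile in applicants:
        text = profile.lower()
        matches = sum(1 for kw in required_keywords if kw in text)
        if matches >= min_matches:
            shortlist.append(name)
    return shortlist

applicants = [
    ("A", "Five years of Python development, SQL and agile experience"),
    ("B", "Graphic design portfolio, photography, typography"),
    ("C", "Python scripting, data analysis with SQL"),
]
# A matches python+sql+agile, C matches python+sql, B matches none.
print(preselect(applicants, ["python", "sql", "agile"], min_matches=2))
```

The sketch also shows the limits of the approach: anything the keyword list does not capture, however relevant, is invisible to the filter.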

In a further step beyond selection according to preset criteria, recruiting algorithms can be allowed to independently evaluate the qualifications of candidates, potentially aided by Machine Learning. In 2014, the online giant Amazon began developing such software, but the tool was never deployed and the project was cancelled in early 2017. The software was trained on data from Amazon’s recruitment practices over the previous ten years. During this time, mostly men were recruited into the tech sector. The software concluded that men should be preferred over women when filling job vacancies. Furthermore, it reproduced other discriminating selection criteria [LINK].

Other procedures for candidate assessment rely on evaluating personality based on psychometric properties such as the voice [LINK]. IBM’s Watson technology “Personality Insights” analyzes personal communication on social media or in other digital formats to infer personality traits.

Apart from the question of whether the processes applied are functional and efficient, it is a problem that applicants are rarely informed when they are evaluated automatically. In fact, individual job applicants can only take action when their application is rejected, by referring to the Anti-Discrimination Law (Allgemeines Gleichbehandlungsgesetz – AGG). The extent to which labelling or certification requirements could regulate the use of ADM systems in recruitment processes needs to be discussed.


Larger companies in particular use software in their personnel management, for example in payroll accounting, holiday planning or in the registration of sick days. Furthermore, external service providers offer software that helps to identify employees who are likely to look for jobs elsewhere. This is important for companies that are particularly concerned about talent retention. Other applications focus on employee performance monitoring based on various data samples that are generated in the course of everyday processes within the company. Other products offer procedures for continuous staff surveys in order to analyze team dynamics and the job satisfaction of individual employees.

Operators of digital platforms like Uber (taxi service), Foodora (food delivery service) or Helpling (placement service for cleaning personnel) also use software to replace middle management, customer service and accounting when acting as an agent for their self-employed members of staff. Shift planning, order allocation and performance control are automatically performed via a smartphone app. Further examination is needed to find out which legal remedies against the use of ADM already exist for such freelance workers who offer their services to digital platforms (keywords: “Gig Economy” and “Platform Economy”), or which could be created.

Since the beginning of 2018, AlgorithmWatch has been engaged in a research project on automated personnel management and Corporate Co-determination (Betriebliche Mitbestimmung) [LINK]. Apart from determining how common ADM systems are in Germany, the project also looks at the benefits and downsides these systems bring to employees. In addition, the project examines who has access to data and whether or not aspects of Corporate Co-determination are affected.


In Germany, the institutions that make up the employment agency (Arbeitsagentur – ARGE) and its so-called “Job Centers” employ a number of software systems. As some of the Job Centers are run solely by the local administrations (optional local management – Optionskommunen), it is hard to gain a full overview. In regard to the employment agency, the answer to a Minor Interpellation of the parliamentary group “Die Linke” in the autumn of 2018 clearly showed that some processes that contain ADM components are already in use or are in the planning stages. [LINK]

Among these are:

  • PP-Tools: These serve as the basis for the “calculation aid for labor market opportunities” (Berechnungshilfe Arbeitsmarktchancen – BAC), which is used by a wider circle of people (at least 12,500) and “calculates the labor market opportunities of the client”. So far it has been impossible to find out how this is done. The kind of problems such a system may exhibit is demonstrated by the job market opportunities model of the Public Employment Service Austria (AMS-Arbeitsmarkt-Chancen-Modell, see box).
  • DELTA-NT: This application is used by the ARGE’s occupational psychology service. It is a computer-supported psychological assessment tool that is part of the career orientation process (“psychological suitability diagnostics”). The procedure, also known as Computer Assisted Testing (CAT), was developed by the German army. [LINK]
  • VERBIS: The central information system for placement and consultation at the ARGE is linked to many other systems and processes. It contains, for example, features that automatically match the profiles of employment seekers stored at the job agency with job vacancies and training programs.
  • 3A1: “Automated Application Processing of Unemployment Benefit” (Automatisierte Antragsbearbeitung Arbeitslosengeld). This project has been in development since the beginning of 2019 and is supposed to reach “process maturity” in a first step by the middle of 2020. According to the German federal government, the necessary processes and data flows for an “automated preparation of decision-making” were tested beforehand with a prototype. However, the decisions concerned were so-called “circumscribed powers”, i.e. non-discretionary decisions; the automation of “discretionary administrative practices” was not part of the project.

Some questions remain open: Which software systems are in use to prepare for “discretionary administrative practices” directly or indirectly? How are these software processes monitored and checked for potentially discriminating effects? Every year tens of thousands of decisions are taken to court. What role does the Job Center software play in these contested decisions?



In Austria at the end of 2018, an announcement by the Public Employment Service Austria (Arbeitsmarktservice – AMS) – which is similar to the German employment agency (Arbeitsagentur – ARGE) – stirred controversy. From 2019, a software extension started automatically evaluating the chances of a job placement on the basis of a statistical model. According to media reports, grouping into one of the three categories “high”, “medium” and “low” was not intended to have consequences for the time being. However, from 2020 it might have an impact on the granting or denial of funds [LINK]. Criticism was sparked by the fact that the weighting of the different factors was based solely on an analysis of the recent labor market, so that the automated system mirrors existing discrimination. This means that specific groups (women, older persons, non-nationals) are generally assessed less positively than others. [LINK] Austrian researchers called the approach of this software solution commissioned by the Public Employment Service a “prime example of discrimination” [LINK].




The General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz – AGG) and provisions on automated acts of public administration form the overarching framework for dealing with automated decision-making.


In Germany, the principle of equality derived from Article 3 of the Basic Law (Grundgesetz) effectively means that “all persons are equal before the law”. The General Equal Treatment Act (AGG), which is also known as the “Anti-Discrimination Law”, was enacted in 2006.

The AGG defines equal treatment as the prevention and elimination of “discrimination on the grounds of race or ethnic origin, sex, religion, disability, age or sexual orientation”. The Act covers a wide range of societal aspects (including access to employment, goods, services and housing) and equal treatment is mandatory. Thus, the legislator indirectly provides us with a definition of the state’s understanding of participation. AGG requirements are also relevant for ADM systems.


Paragraphs within the Administrative Procedure Act (Verwaltungsverfahrensgesetz – VwVfG) and the Social Code (Sozialgesetzbuch – SGB) regulate the use of automated administrative procedures in Germany.

In general, the authorities are only allowed to use automated procedures if they are legally permitted to do so. As a result, authorities that implement fully automated procedures are required to develop guidelines to ensure compliance with the principle of equal treatment. All automated procedures must be set up in such a way that they recognize when an applicant’s situation deviates from the scenarios provided in the programming. In such a situation, the case must be assessed individually. In addition, citizens must have the opportunity to present their own point of view, for example if they want to apply for special circumstances in their tax declaration.
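The requirement described above – automated processing only for cases that match the programmed scenarios, with deviating cases handed to a human – can be sketched roughly as follows. The scenario names and field names are hypothetical, invented for illustration:

```python
# Sketch of the fallback rule for fully automated administrative
# procedures: cases that deviate from the programmed scenarios, or in
# which the applicant claims special circumstances, must be assessed
# individually by a human. Scenario names are hypothetical.

KNOWN_SCENARIOS = {"standard_employee", "pensioner", "student"}

def process_application(case):
    deviates = case.get("scenario") not in KNOWN_SCENARIOS
    if deviates or case.get("special_circumstances"):
        return "route to individual human assessment"
    return "decide automatically"

print(process_application({"scenario": "student"}))
print(process_application({"scenario": "cross-border freelancer"}))
print(process_application({"scenario": "pensioner",
                           "special_circumstances": True}))
```

The design point is that the human route is the default whenever the system cannot positively match a known scenario, not the exception.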

When fully automated systems are in place, the criteria for decisions that are used by algorithms have to be transparent. Furthermore, whenever ADM systems are used by authorities in more than just a support role their basic principles and decision criteria are subject to the publication requirement.

Risk management procedures, in which operations are forwarded to a human for more detailed review, may not discriminate without substantiation; any differentiation must be justified with a logical reason.



Among the new regulations that have recently come into effect in the EU and in Germany, the General Data Protection Regulation (GDPR) is one of the most important. The provisions of this EU-wide data protection regulation are reflected in, among other places, the automated administrative acts described above. Regarding ADM, the GDPR stipulates that citizens have the right to appeal against ADM if three criteria are met:

  • The decision-making procedure that was objected to was fully automated
  • Personal data was used for this decision
  • The person affected suffers far-reaching legal or similarly significant consequences
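The three criteria are cumulative, which can be stated as a simple conjunction. The function and parameter names below are our own shorthand, not legal terminology:

```python
# Sketch of the three cumulative GDPR criteria described above.
# Parameter names are informal shorthand, not legal terms of art.

def right_to_appeal(fully_automated: bool,
                    uses_personal_data: bool,
                    significant_consequences: bool) -> bool:
    """The right to appeal only arises if all three criteria are met."""
    return fully_automated and uses_personal_data and significant_consequences

# A human reviews the decision at the end: the right does not arise.
print(right_to_appeal(False, True, True))   # False
# A fully automated credit denial based on personal data: it does.
print(right_to_appeal(True, True, True))    # True
```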

There is some disagreement as to whether the GDPR is sufficient to give people adequate protection against disadvantages due to discrimination through ADM. One possible loophole in the regulation relates to credit. Bureaus such as SCHUFA that evaluate the creditworthiness of clients do not have to explain their procedures to those concerned, despite the fact that such scoring must be transparent according to the GDPR and the Federal Data Protection Act (Bundesdatenschutzgesetz – BDSG). Under GDPR Art. 22 and BDSG §31, this obligation would only come into effect if the credit bureaus also took decisions on the extension of credit or something similar. However, those decisions are taken by the financial institutions themselves, who in turn do not disclose the details because they do not calculate the score.

Labor Law Criteria at the EU Level

At the EU level, a legal structure regarding the principle of labor equality arose in parallel to the development of labor law in Germany. Various directives and court decisions (especially on Article 157 of the Treaty on the Functioning of the European Union – TFEU) form the foundation for the rights of employees in relation to partially or fully automated decisions. These rights rest on the principle of equality, namely the prohibition of discrimination based on criteria such as sex, ethnic affiliation and racial origin.

In addition, Article 21 of the European Charter of Fundamental Rights addresses non-discrimination. However, this article is phrased in such a way that the criteria listed in it are not exclusive. This might be of relevance in the future, especially in regard to the application of Article 21 to the protection objectives connected to ADM.

Human Rights

As a sovereign state and an EU-member, Germany is a signatory to various human rights conventions. Since ADM also touches on human rights (e.g. the right to personal freedom and security, equality before the law, freedom of religious expression etc.), its influence on future – new or revised – laws and regulations must be considered. [LINK]


In the chapter “Education, Stock Trading, Cities & Traffic”, the present paper deals with high-frequency trading (HFT) in stocks and autonomous cars as examples of further industry- or sector-specific regulation. The regulation of medical devices and health technologies is the subject of the chapter “Health and Medicine”.



In principle, the interests of consumers in Germany are well represented by consumer protection bodies and associations. Apart from credit assessments there are few areas in which the social participation of consumers is impacted directly and decisively by ADM systems.

In the retail sector, ADM is mainly used in online trading. Consumers generally still have alternatives if they do not want to order or book online, although these options could become far fewer in the future. In online trading, though not exclusively, ADM can be used for customer segmentation through Dynamic Pricing or scoring procedures, which in turn can lead to preferential treatment or discrimination. Both practices are legal and legitimate in principle. Yet they can lead to systematic discrimination against, or the exclusion of, specific consumer groups.

One well-known example of Dynamic Pricing is the Uber taxi service, which sets its prices depending on demand and the time of day. In other applications of Dynamic Pricing, the price of an offer fluctuates according to the end device consumers use for their request. Insurers give discounts for telematics-based car insurance; in these cases, the tariff level is determined by the driving style recorded by the telematics system. In customer relationship management, ADM is used to calculate the so-called Customer Lifetime Value (CLV): how profitable are customers, who should get preferential treatment, and who can be placed at the end of the telephone queue when necessary?
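Surge-style Dynamic Pricing of the kind described for taxi services can be sketched in a few lines. The base price, caps and multipliers below are invented for illustration and do not describe any real operator’s formula:

```python
# Hypothetical sketch of demand- and time-based Dynamic Pricing,
# loosely modelled on the surge pricing described above.
# Base price, surge cap and night multiplier are invented.

def dynamic_price(base_price, demand_ratio, hour):
    """demand_ratio: open requests divided by available drivers."""
    surge = max(1.0, min(demand_ratio, 3.0))   # surge capped at 3x
    night = 1.2 if hour < 6 or hour >= 22 else 1.0
    return round(base_price * surge * night, 2)

print(dynamic_price(10.0, demand_ratio=0.8, hour=14))  # quiet afternoon -> 10.0
print(dynamic_price(10.0, demand_ratio=2.5, hour=23))  # busy night -> 30.0
```

Even this toy version shows why such pricing can segment customers: two people requesting the same ride pay different prices depending only on when and under what conditions they ask.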

While Dynamic Pricing is not yet common in Germany, scoring procedures have been in use in the consumer sector for a while. Scoring in this context means a categorization of persons according to a number of selected criteria. The combination of specific values of these criteria results in a score that can influence, for example, the price customers pay for a product or whether a bank will extend credit to them. In Germany, the most well-known example of scoring is the credit assessment provided by the private company Schufa (see box below: OpenSCHUFA).
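A scoring procedure of this kind can be pictured as a weighted sum over criteria. The criteria, weights and cut-off below are invented for illustration; real bureaus such as SCHUFA do not disclose their formulas:

```python
# Minimal sketch of a credit-style score: a weighted combination of
# selected criteria. All criteria, weights and the cut-off are
# hypothetical and do not reproduce any real bureau's procedure.

WEIGHTS = {"years_at_address": 2, "on_time_payments": 5, "open_loans": -3}

def score(person):
    """Combine the criteria values into a single number."""
    return sum(WEIGHTS[k] * person.get(k, 0) for k in WEIGHTS)

applicant = {"years_at_address": 4, "on_time_payments": 10, "open_loans": 2}
s = score(applicant)            # 2*4 + 5*10 - 3*2 = 52
print(s, "credit offered" if s >= 50 else "credit denied")
```

The sketch makes the transparency problem concrete: without knowing the weights and the cut-off, a person cannot tell which of their attributes tipped the decision.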

Scoring helps companies decide with which people they want to establish a customer relationship. However, this decision privilege under the freedom of contract is limited only in essential areas such as tenancy and labor law, because these affect the principle of equal treatment. Whether the rejection of customers justified by the freedom of contract, or price discrimination (via scoring or Dynamic Pricing), also runs contrary to the principle of equal treatment in other areas is still subject to debate.

Under current law, consumers have the right to be informed when they are subject to scoring. Yet, according to experts such as the Advisory Council for Consumer Affairs (Sachverständigenrat für Verbraucherfragen – SVRV), this legal right is insufficiently specified by the law. The existing means available to enforce citizens’ rights are often criticized as inadequate. [LINK]



In the spring of 2018, the Open Knowledge Foundation Germany and AlgorithmWatch started the project OpenSCHUFA. The goal was to examine the scoring procedure of Germany’s best-known credit bureau Schufa for potential discrimination. The bureau holds the data of about 70 million citizens (out of Germany’s 83 million population). The company provides information to banks which may result in customers being denied credit. Following a successful crowdfunding drive, more than 3,000 people donated their Schufa reports using an online portal specifically developed for this campaign.

In the autumn of 2018, Spiegel Online and Bayerischer Rundfunk published an analysis of the donated data. The editors emphasized that the data available to them was by no means representative. Nevertheless, they were able to identify various anomalies. For instance, it was striking that a number of people were rated rather negatively even though SCHUFA had no negative information about them, e.g. on debt defaults. There also appear to be noticeable differences between different versions of the SCHUFA score.




Many stakeholders in the economic, scientific, political and civil society sectors in Germany have voiced their interest in the discourse on ADM and its importance to participation. The “Atlas of Automation” online database introduces about 30 of these actors.

It is difficult to clearly define the German stakeholders that are relevant to participation. This is partly because a greater part of the debate on ADM takes place under the wider term “AI”. Particularly in the US, the “ethics of artificial intelligence”, which covers far more than just ADM, draws much attention. Leaders in this debate come not only from large tech companies, but also from scientific institutions and non-profit organizations that are often endowed with large sums of grant money from foundations. Within this mélange, the strategic interests that guide individual actors are not always easily identified. A study published in 2018 by the Reuters Institute for the Study of Journalism at Oxford University in the UK [LINK] is worth noting: the authors found that reporting on ADM is mainly shaped by the industry and that industry statements are rarely scrutinized. Even though the study only examined the British media sector, there is little indication that the situation is any different in Germany.


Industry associations like the German Association for IT, Telecommunications and New Media (Bitkom), the German Association for the Digital Economy (BVDW) and corporations such as Deutsche Telekom AG, Google and Facebook all have their own viewpoint and regularly lobby on the subject of automation and participation – and they often use the keyword “Artificial Intelligence”. Further actors are the TÜV companies (on the issue of certification) and the Initiative D21 association. This initiative is a non-profit network that deals with ethical questions concerning automation. Members consist mainly of companies that operate in the digital sphere.


A great number of organizations which are active at the interface between politics and industry are often financed by both sides. Among them are the platforms “Learning Systems” (on the issue of AI applications) and “Industry 4.0” (on intelligent interlinking and industrial automation) which are operated by the National Academy of Science and Engineering (Acatech) and sponsored by the Federal Ministry of Education and Research and the Federal Ministry for Economic Affairs and Energy. In addition, some Fraunhofer Institutes, which are also state funded, provide commissioned research for the industry (on issues such as health and camera surveillance).


In the “AI strategy” published by the German federal government at the end of 2018, many sections touch more or less directly on ADM that is relevant to participation. Accordingly, the Federal Ministry of Labour and Social Affairs (“Labour 4.0”), the Federal Ministry for Economic Affairs and Energy as well as the Federal Ministry of Justice and Consumer Protection are all obliged to follow the AI strategy, and can all be considered actors in the field of ADM relevant to participation. Noteworthy political bodies are the Advisory Council for Consumer Affairs (Sachverständigenrat für Verbraucherfragen – SVRV – with its main topic “Consumers in the Digital World”), the Data Ethics Commission of the German government (which in the past has also commented on issues of autonomous driving), as well as the Study Commission “Artificial Intelligence – Social Responsibility and Economic Potential”. Last but not least, Germany’s political parties relate, to varying degrees, to the discourse around ADM and participation through their policies on digitalization, parliamentary initiatives, and their respective party-affiliated foundations.


All aspects of ADM and participation are mirrored in a great variety of research institutes and projects run by scientific organizations and foundations. The Weizenbaum Institute for the Networked Society, founded in 2017, distinguishes itself with its interdisciplinary orientation in the field of ADM and participation. Other exemplary institutions are the Hans-Bredow-Institute in Hamburg, with its communication research on platforms (intermediaries), and the trade union affiliated Hans-Böckler-Foundation, which manages the research association “Digitalization, Co-Determination, Good Labour” in which AlgorithmWatch is also taking part.


The Bertelsmann Foundation can certainly be seen as a major player in the civil society sector: It started the “Ethics of Algorithms” project in 2017 to independently work on topics and commission third parties with studies and papers on legal issues or key topics. AlgorithmWatch, the organization behind the current Atlas, receives structural funding through this project of the Bertelsmann Foundation while remaining independent in its work. Apart from the Bertelsmann Foundation, NGOs and other organizations from the sector of digital policy such as the Society for Computer Science (Gesellschaft für Informatik) or the Federation of German Consumer Organizations (Verbraucherzentrale Bundesverband – vzbv) also address ADM related to participation.

Among the trade unions, the United Services Trade Union (Vereinte Dienstleistungsgewerkschaft – ver.di) is the one which most intensively deals with digitalization and automation in the field of labor, and thus also with ADM related to participation.



In the spring of 2019, the iRights Lab and the Bertelsmann Stiftung’s “Ethics of Algorithms” project presented a list of criteria that must be observed to enable and facilitate the socially beneficial design and verification of algorithmic systems. The criteria, published under the title “Algo.Rules”, serve as a basis for ethical considerations and for the implementation and enforcement of legal frameworks. There are plans to develop them further, and the cooperation of interested parties is welcome.

The programmatic headings under which Algo.Rules groups the criteria are as follows: 1. Strengthen competency; 2. Define responsibilities; 3. Document goals and anticipated impact; 4. Guarantee security; 5. Provide labelling; 6. Ensure intelligibility; 7. Safeguard manageability; 8. Monitor impact; 9. Establish complaint mechanisms.



Way of working

For the “Atlas of Automation” we researched and weighed the topics specifically in regard to a focus on the issue of participation. The Atlas project is accompanied by a purpose-built database which is open to the public.

The current “Atlas of Automation” focuses on participation and the topic of automated decision-making (ADM) in Germany. To focus the Atlas, we used the following approach to select the uses of ADM that we wanted to concentrate on:

Firstly, we started from the most pertinent definitions of the term participation. From there we defined certain groups of people who might be excluded from specific aspects of society due to age, gender, origin or status (e.g. seeking employment). Secondly, we identified situations where these groups might come into contact with ADM (for example when dealing with public authorities). Based on these two steps, we drew up a list of specific ADM products and technologies currently in use at the identified interfaces, which we then categorized by keywords. In the process, we referred to existing compilations of ADM technologies, such as those in the EU ADM Report. This list formed the foundation of the database on ADM technologies which we created in the course of producing the Atlas of Automation.

Within the framework of our internal process, each database entry received a point rating which quantifies the relevance of the respective technology to participation. The point rating was generated from answers to questions such as: “Does an ADM system operate passively (through recommendations) or actively (through the direct implementation of a decision)?” Furthermore, points were given for the critical impact a technology has on the environment, the common good, self-determination, physical integrity or aspects of social participation. In addition, we took into consideration which spectrum of actions is available to those operating an ADM system or affected by it (e.g. the option to appeal against an automated decision). We also included whether an ADM system is run by the government or by private actors. In addition to the point rating, we documented whether the product or technology (e.g. face recognition) is already in use or only being tested.
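A point rating of this kind can be pictured as a simple additive scheme: each answered question contributes points, and the sum quantifies relevance to participation. The criteria and point values below are hypothetical and do not reproduce the Atlas database’s actual weights:

```python
# Illustrative sketch of an additive point rating for ADM systems.
# Criteria names and point values are hypothetical examples only.

CRITERIA = {
    "acts_directly": 3,        # implements decisions itself vs. recommends
    "critical_impact": 2,      # environment, common good, self-determination...
    "no_appeal_option": 2,     # affected people cannot appeal the decision
    "run_by_government": 1,    # government rather than private operator
    "in_productive_use": 1,    # already deployed, not only being tested
}

def relevance_points(entry):
    """Sum the points of all criteria that apply to a database entry."""
    return sum(pts for key, pts in CRITERIA.items() if entry.get(key))

example_entry = {"acts_directly": False, "critical_impact": True,
                 "no_appeal_option": True, "run_by_government": True,
                 "in_productive_use": False}
print(relevance_points(example_entry))  # 2 + 2 + 1 = 5
```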

In the course of the development of the database and the point system, the topics that form the individual chapters of this report emerged. During the preliminary work we also realized which issues of regulation needed to be considered in the scope of the Atlas project in regard to ADM and participation, and which actors needed to be highlighted. To us, “actors” in this context are not users or customers of ADM systems, but rather (civil society or commercial) interest groups, NGOs, foundations, research networks, and individual companies as well as government agencies and boards. The decisive criterion for the selection of relevant actors was the consideration of the extent to which they actively shape the discourse on ADM and participation, for example through studies, policy papers or events.

The project database we created during the research phase is an essential part of the Atlas project and we were determined to make it open to the public. The entries in the database, as well as the text in the Atlas, are based primarily on research in literature and on the Internet. In some cases we also consulted with the producers or operators of software systems. Regarding regulatory questions, we asked experts for their evaluation. The Atlas database will be extended and continuously updated.




These recommendations are based on the research and analyses conducted during the creation of the Atlas of Automation. They address systems of automated decision-making (ADM) which are significant in relation to participation. We hope that our recommendations for action will spur discussion and inspire politicians and decision makers in authorities, companies and civil society organizations.


In the development and application of systems for ADM, the guiding principle should be to do no harm (primum non nocere). This principle was established with the Hippocratic Oath, which is still an essential reference for ethical standards in medicine. Similarly, ADM systems should be designed and implemented with specific ethical standards in mind. For example, traffic routes recommended by GPS-based navigation devices should not only factor in criteria such as the speed and length of a journey, but also whether the recommended route, and the potential increase in traffic, will affect residential areas or nature reserves.

Any technology impact assessment should consider the purpose behind the use of an ADM system, including the characteristics of the producer and the user (whether they are a public body or a private business). In addition, attention needs to be paid to the quality and origin of the data used and to the anticipated effects beyond the intended use.


At the moment, so-called Artificial Intelligence (AI) dominates public debate. Extreme scenarios such as “super intelligence” and “singularity” are frequently mooted. However, such nightmare visions obscure a highly sensitive aspect of “artificial intelligence”, one that is already very present: decisions that impact social participation are increasingly delegated to software. We therefore consider it more helpful to use the term “algorithm-based decision-making” (ADM) instead of the loaded term “AI”. In doing so, we want to highlight the issue of responsibility which is inherent in many of these so-called “AI” applications. The responsibility for decisions which are taken or deliberated with the help of software systems still lies with the humans who commission, develop and implement ADM systems. Indeed, when it comes to predictive analytics, the focus should very much be on ADM. This is especially important when it comes to predicting human behaviour, e.g. with respect to creditworthiness or the likelihood of committing crimes. In this context and beyond, ADM systems touch key values in society such as the rule of law or fairness. Therefore, people have to be able to control them democratically through a combination of regulatory instruments, supervisory mechanisms and technologies.


Citizens should be empowered to assess the results and the potential of automated decisions more competently. Furthermore, Germany’s federal government should now follow the promises made in its AI strategy with action. The strategy declaration states: “The government needs to enable scientists and civil society to provide independent and skills-based contributions to this important public debate.” [LINK] (pdf, p. 43). One leverage point for the “empowerment” of citizens is the education sector. It is particularly important to develop materials and programmes for schools, vocational training and further education. Finland, where the online course Elements of Artificial Intelligence was developed in a public-private partnership, could serve as a role model. [LINK] This free course, available in Finnish and English, introduces societal implications of AI such as algorithmic distortions and options for de-anonymizing data. So far, almost 100,000 Finns (out of an overall population of 5.5 million) have enrolled in the course.


Journalists, editors and publishers should treat ADM as a subject for research and reporting. Competencies should be built and extended in order to enable responsible reporting on algorithms (“Algorithmic Accountability Reporting”). The skills that already exist in the field of data journalism could prove useful in reaching this goal. In the face of a growing demand for journalism that serves the common good, we advise foundations to fund Algorithmic Accountability Reporting more intensively.


Our research for the Atlas of Automation has made us acutely aware of a universe of different software systems, relevant to participation, in all kinds of branches of administration and other service sectors. So far, a register of such systems that allows for an evaluation of the degree of automation and of its effects on participation and on society is still missing. To ensure democratic discourse and control, it would be desirable for municipalities, federal states and the national government in Germany to commit, in the spirit of Open Government, to creating such a register. The experience of the city of New York might be helpful in this respect: at the end of 2017, the city council passed an ordinance on “Algorithmic Accountability”. In May 2018, an “Automated Decision Task Force” was established in the municipal administration which, as a first step, set out to document the current state of automated decisions [LINK].
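What a single entry in such a register might record can be sketched as follows. The schema is entirely hypothetical; the field names and categories are our illustration, not an official or proposed standard, and only the IZE example is drawn from this report.

```python
# Hypothetical sketch of one entry in a public register of ADM systems,
# in the spirit of the Open Government proposal above. All field names
# and category values are illustrative, not an official schema.
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class ADMRegisterEntry:
    system_name: str
    operator: str                     # public body or private vendor
    purpose: str
    automation_level: str             # e.g. "decision support" or "fully automated"
    data_sources: List[str] = field(default_factory=list)
    affects_participation: bool = False
    last_review: str = ""             # ISO date of the last external review, if any

entry = ADMRegisterEntry(
    system_name="Intelligent Zoning Engine",
    operator="District of Tempelhof-Schoeneberg, Berlin",
    purpose="Delineation of school catchment areas",
    automation_level="decision support",
    data_sources=["pupil home addresses", "block-level demographic data"],
    affects_participation=True,
)
print(asdict(entry)["automation_level"])  # decision support
```

Even a minimal structure like this would make it possible to filter systems by operator, degree of automation or relevance to participation, which is precisely the overview that is currently missing.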

Such a survey of the current state of affairs in Germany would also strengthen the administration itself, because it would help it keep an overview of its own ability to act. On the one hand, employees should be trained to recognize the extent to which software (subtly) prepares decisions or, in effect, already takes them. Where applicable, existing software-based processes should be reviewed to detect bias and discrimination. On the other hand, staff should also be able to voice recommendations and to develop procedures for implementing ADM where it is appropriate. Furthermore, mechanisms for evaluating the respective software systems, as well as methods for conceptualizing ADM, need to be established within the administration.


Various sides have called for an “Algorithm TÜV” (an institute for testing and certifying software). We are reserved in our support for this demand, because a single institution could hardly meet the diverse regulatory needs of each and every sector. Here, too, documentation of existing approaches to regulation would be desirable. In various sectors, control institutions are already in place, and their areas of responsibility might only need to be extended or modified.

In particular, the General Data Protection Regulation (GDPR) already contains provisions on automated decisions. Whether these are sufficiently far-reaching, or leave regulatory gaps, still needs to be clarified. In specific cases, such as predictive policing, the GDPR does not apply when geographic areas as a whole, rather than individual citizens, are affected by automated decisions. There is a risk that ADM could, in effect, declare whole neighborhoods to be supposed crime hotspots.

More generally, the demand for accountability for automated decisions needs to be taken into account: we need to know how ADM processes work, what data is used, and for what purpose. Transparency on its own, without explanation, is insufficient when dealing with complex software systems and large amounts of data. In addition, it needs to be clarified when, and how often, ADM systems should be reviewed, since internal and external factors affecting an ADM system can change during its development, implementation and regular use. In this context, it is worth taking a look at the suggestion for a “Social Impact Statement” by the association Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) [LINK].


There are already numerous regulations that permit the use of ADM systems and control that use, for example in financial markets and in medicine. However, as things stand, it is all too easy to form the impression that this supervision is performed inadequately. Many supervisory authorities are neither qualified nor equipped to review complex ADM systems adequately.

There is room for improvement; however, a quick fix is a great challenge, as it is difficult to find the right personnel. Along with the problem of finding qualified staff, there also appears to be a lack of willpower on the part of the authorities. This has to change, so that they can exercise their supervisory function assertively. This is especially important as it relates to the opportunities available to citizens to participate. It is therefore necessary to proactively identify and review potentially problematic ADM systems, such as credit scoring.


As we show in the Atlas, ADM that is relevant to participation is not always in public hands. On the one hand, private companies provide software for public institutions. On the other hand, they independently offer and operate services that contain at least some ADM elements, e.g. in health care, credit approval or the provision of power infrastructure. Therefore, private companies should also be subject to quality control if their products can have collective effects. In addition to staff training, self-regulation and certification programs, state-defined audit procedures for the kind of accountability outlined above could be considered. Furthermore, participation in connection with the automation of digital services should also be taken into account. This relationship may shift the boundaries between the right to private autonomy in economic activities on the one hand, and the demand for access to public goods as guaranteed by the “anti-discrimination act” on the other. This is particularly relevant to consumer protection issues, but it is also important in regard to new public platforms such as Facebook.


Apart from the software, automated decisions require hardware and Internet infrastructure, and these operations consume energy. As long as the use of ADM does not lead to savings in other areas, this additional use of resources has a negative impact on the ecosystem. This is relevant to participation because it impairs the foundations of human life. Studies show that mobile and Internet infrastructure (radio masts, server farms and cables) is currently responsible for about four percent of global CO2 emissions every year. The continuous growth in the number of devices could more than triple this figure [LINK].

In the face of this threat to the environment, the expected gains from the implementation or extension of ADM systems need to be weighed against their ecological effects, in order to see whether those effects justify the use of ADM. For example, it needs to be considered that smart city concepts require large numbers of digitally connected devices, which consume considerable energy and resources and are embedded in an energy-intensive Internet and server infrastructure.
