Atlas der Automatisierung

Automated Decisions and Participation in Germany

The Atlas of Automation is no longer being updated.
Its data is therefore no longer up to date.


Recommendations

These recommendations are based on the research and analyses that were conducted during the creation of the Atlas of Automation. They address systems of automated decision-making (ADM) that are significant for participation. We hope that our recommendations for action will spur discussion and inspire politicians and decision-makers in authorities, companies and civil society organizations.

ASSESS IMPACT

In the development and application of systems for ADM, the guiding principle should be to do no harm (primum non nocere). This principle was established with the Hippocratic Oath, which is still an essential reference for ethical standards in medicine. Similarly, ADM systems should be designed and implemented with specific ethical standards in mind. For example, traffic routes recommended by GPS-based navigation devices should not only factor in criteria such as the speed and length of a journey, but also whether the recommended route, and the potential increase in traffic, will affect residential areas or nature reserves.
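To illustrate, such a harm-aware routing criterion can be sketched as a weighted cost function. This is a minimal sketch under our own assumptions: all weights, field names and numbers below are hypothetical and not taken from any real navigation system.

```python
# Illustrative sketch: a route score that, alongside travel time and
# distance, penalizes routes crossing residential areas or nature
# reserves. All weights and field names are hypothetical assumptions.

def route_score(route, w_time=1.0, w_length=0.5,
                residential_penalty=10.0, reserve_penalty=25.0):
    """Lower scores are better. `route` holds estimated travel time
    (minutes), length (km), and counts of sensitive segments crossed."""
    score = w_time * route["time_min"] + w_length * route["length_km"]
    score += residential_penalty * route["residential_segments"]
    score += reserve_penalty * route["reserve_segments"]
    return score

fast_but_harmful = {"time_min": 18, "length_km": 12,
                    "residential_segments": 3, "reserve_segments": 1}
slower_but_benign = {"time_min": 24, "length_km": 15,
                     "residential_segments": 0, "reserve_segments": 0}

# With harm penalties included, the slightly slower route wins.
best = min([fast_but_harmful, slower_but_benign], key=route_score)
```

The point of the sketch is that the ethical standard is not an afterthought but an explicit term in the optimization criterion, whose weight can be debated and audited.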

Any technology impact assessment should consider the purpose behind the use of an ADM system, including the characteristics of the producer and of the user (whether they are a public body or a private business). In addition, attention needs to be paid to the quality and origin of the data used, and to the anticipated effects beyond the intended use.

SPEAK OF ADM INSTEAD OF AI

At the moment, so-called Artificial Intelligence (AI) dominates public debate. Extreme scenarios such as “super intelligence” and “singularity” are frequently mooted. However, such nightmare visions obscure a highly sensitive aspect of “artificial intelligence”, one that is already very present: decisions that affect social participation are increasingly delegated to software. We therefore consider it more helpful to use the term “algorithm-based decision-making” (ADM) instead of the loaded term “AI”. In doing so, we want to highlight the question of responsibility inherent in many of these so-called “AI” applications. The responsibility for decisions that are taken, or prepared, with the help of software systems still lies with the humans who commission, develop and implement those systems. Indeed, when it comes to predictive analytics, the focus should very much be on ADM. This is especially important when it comes to predicting human behaviour, e.g. with respect to creditworthiness or the likelihood of committing crimes. In this context and beyond, ADM systems touch on key values of society, such as the rule of law and fairness. Therefore, people have to be able to control them democratically, through a combination of regulatory instruments, supervisory mechanisms and technologies.

EMPOWER CITIZENS AND CIVIL SOCIETY

Citizens should be empowered to assess the results and the potential of automated decisions more competently. Furthermore, Germany’s federal government should now let actions follow the promises made in its AI strategy, which declares: “The government needs to enable scientists and civil society to provide independent and skills-based contributions to this important public debate.” [LINK] (pdf, p. 43). One leverage point for this empowerment is the education sector. It is particularly important to develop materials and programmes for schools, vocational training and further education. Finland, where the online course Elements of Artificial Intelligence was developed in a public-private partnership, could serve as a role model. [LINK] This free course, available in Finnish and English, introduces societal implications of AI such as algorithmic bias and the possibilities of de-anonymizing data. So far, almost 100,000 Finns (out of an overall population of 5.5 million) have enrolled in this course.

EXTEND REPORTING

Journalists, editors and publishers should treat ADM as a subject for research and reporting. Competencies should be built and extended to enable responsible reporting on algorithms (“Algorithmic Accountability Reporting”). The skills that already exist in the field of data journalism may prove useful in reaching this goal. In the face of a growing demand for journalism that serves the common good, we advise foundations to fund Algorithmic Accountability Reporting more intensively.

STRENGTHEN ADMINISTRATION

Our research for the Atlas of Automation has made us acutely aware of a universe of different software systems, across all branches of administration and other service sectors, that are relevant to participation. So far, there is no register of such systems that would allow an evaluation of their degree of automation and of their effects on participation and on society. To ensure democratic discourse and control, municipalities, federal states and the national government in Germany should commit, in the spirit of Open Government, to creating such a register. The experience of the city of New York might be helpful in this respect: at the end of 2017, the city council passed an ordinance on “Algorithmic Accountability”. In May 2018, an “Automated Decision Task Force” was established in the municipal administration which, as a first step, set out to document the current state of automated decisions [LINK].

Such a survey of the current state of affairs in Germany would also strengthen the administration itself, because it would retain an overview of its own ability to act. On the one hand, employees should be trained to recognize the extent to which software (subtly) prepares decisions, or effectively already takes them. Where applicable, existing software-based processes should be reviewed for bias and discrimination. On the other hand, staff should also be able to voice recommendations and to develop procedures for implementing ADM where it is appropriate. Furthermore, mechanisms for evaluating the respective software systems, as well as methods for conceptualizing ADM, need to be established within the administration.
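As a sketch, one entry in such a register could record the criteria discussed above: purpose, operator, data sources, degree of automation, and relevance to participation. The field names here are our own assumption, not an official or proposed schema.

```python
# Hypothetical sketch of a single entry in a public register of ADM
# systems. All field names and the example values are assumptions for
# illustration only, not an official schema.
from dataclasses import dataclass, asdict


@dataclass
class ADMRegisterEntry:
    name: str
    operator: str                # public body and/or private vendor
    purpose: str
    data_sources: list
    automation_level: str        # e.g. "decision support" vs "fully automated"
    affects_participation: bool
    last_reviewed: str           # ISO date of the last bias/impact review


entry = ADMRegisterEntry(
    name="Example benefits triage system",
    operator="Municipality X (vendor: Example GmbH)",
    purpose="Prioritize processing of benefit applications",
    data_sources=["application forms", "case history"],
    automation_level="decision support",
    affects_participation=True,
    last_reviewed="2019-01-15",
)

# A plain-dict form that could be published as Open Government data.
record = asdict(entry)
```

Even such a minimal record would let citizens and researchers filter for fully automated, participation-relevant systems and check when each was last reviewed.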

REGULATE INTELLIGIBILITY

Various sides have demanded an “Algorithm TÜV” (an institute for testing and certifying software). We support this demand only with reservations, because a single institution could hardly meet the diverse regulatory needs of each and every sector. Here, too, documentation of existing approaches to regulation would be desirable. In various sectors, control institutions are already in place, and their areas of responsibility might only need to be extended or modified.

In particular, the General Data Protection Regulation (GDPR) already contains rules on automated decisions. Whether these are sufficiently far-reaching, or leave regulatory gaps, still needs to be clarified. In specific cases, such as predictive policing, the GDPR does not apply when whole geographic areas, rather than individual citizens, are affected by automated decisions. There is a risk that ADM systems could effectively declare whole neighborhoods to be supposed crime hotspots.

More generally, the demand for accountability for automated decisions needs to be taken into account. We need to know how ADM processes work, what data is used and for what purpose. Transparency on its own, without explanation, is insufficient when dealing with complex software systems and large amounts of data. In addition, it needs to be clarified when, and how often, ADM systems should be reviewed: internal and external factors affecting an ADM system can change during its development, implementation and regular use. Here, it is worth looking at the suggestion of a “Social Impact Statement” by the association Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) [LINK].

ENSURE ENFORCEABLE SUPERVISION

Numerous regulations already allow the use of ADM systems and govern that use, for example in financial markets and in medicine. However, as things stand, the impression arises that this supervision is performed inadequately. Many supervisory authorities are neither qualified nor equipped to adequately review complex ADM systems.

There is room for improvement, but a quick fix is a great challenge, as it is difficult to find the right personnel. Along with the problem of finding qualified staff, there also appears to be a lack of willpower on the part of the authorities. This has to change, so that they can exercise their supervisory function assertively. This is especially important where citizens’ opportunities to participate are concerned. Therefore, it is necessary to proactively identify and review potentially problematic ADM systems, such as credit scoring.

OBLIGATE PRIVATE BUSINESS

As we show in the Atlas, ADM that is relevant to participation is not always in public hands. On the one hand, private companies provide software for public institutions. On the other hand, they independently offer and operate services that contain at least some ADM elements, e.g. in health care, credit approval or the provision of power infrastructure. Therefore, private companies should also be subject to quality-control processes if their products can have collective effects. In addition to staff training, self-regulation and certification programs, state-defined audit procedures for the kind of accountability outlined above could be considered. Furthermore, participation in connection with the automation of digital services should also be taken into account. This relationship may shift the boundaries between the right to private autonomy in economic activities on the one hand, and the demand for access to public goods as guaranteed by the “anti-discrimination act” on the other. This is particularly relevant for consumer protection, but it also matters with regard to new quasi-public platforms such as Facebook.

CONSIDER ECOLOGICAL ASSESSMENT

Apart from the software, automated decisions require hardware and Internet infrastructure, and operating them consumes energy. As long as the use of ADM does not lead to savings elsewhere, this additional use of resources has a negative impact on the ecosystem. This is relevant to participation because it impairs the foundations of human life. Studies show that mobile and Internet infrastructure (radio masts, server farms and cables) is currently responsible for about four percent of global CO2 emissions every year. The continuous growth in the number of devices could more than triple this figure [LINK].

In the face of this threat to the environment, the expected gains from introducing or extending ADM systems need to be weighed up, especially to see whether the ecological effects justify the use of ADM. For example, it needs to be considered that smart-city concepts require large numbers of digitally connected devices, which consume a lot of energy and resources and are embedded in an energy-intensive Internet and server infrastructure.
