The whole system is manipulated

AI: Beware of machine manipulation

Unintentional manipulation often results from careless programming, such as unreflected use of existing data. For example, an Amazon recruiting algorithm compared applicants with the existing workforce and invited more men than women to interviews, simply because Amazon already employed more men than women. In this way, algorithms perpetuate existing social structures. The mathematician Cathy O'Neil describes a similar example in her book "Weapons of Math Destruction": PredPol, a program that predicts the probability of criminal offenses in certain regions of Pennsylvania (USA). Based on statistics of previous arrests, the algorithm sent police patrols to the corresponding areas. Because more patrols were on the ground there, the number of arrests rose as well, which in turn flowed back into the system and reinforced the trend.
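The feedback loop described above can be made concrete with a small simulation. This is a hypothetical sketch, not PredPol's actual model: two districts with the same underlying crime rate, where patrols are allocated in proportion to past arrests, and arrests can only be recorded where patrols are present.

```python
import random

random.seed(0)

# Hypothetical sketch of the predictive-policing feedback loop:
# both districts have the SAME true crime rate, but district A starts
# with more recorded arrests, so it receives more patrols, which
# produce more arrests, which attract even more patrols.
TRUE_CRIME_RATE = 0.1          # identical in both districts
arrests = {"A": 20, "B": 10}   # historical arrest counts (the "training data")

for year in range(5):
    total = sum(arrests.values())
    for district in arrests:
        # Patrols are allocated proportionally to past arrests ...
        patrols = round(100 * arrests[district] / total)
        # ... and each patrol can only record crime where it is present.
        arrests[district] += sum(
            1 for _ in range(patrols) if random.random() < TRUE_CRIME_RATE
        )

# District A ends up with far more recorded arrests, even though the
# underlying crime rate never differed between the districts.
print(arrests)
```

Nothing about the districts themselves differs; the growing gap in the statistics is produced entirely by the allocation rule feeding on its own output.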

Without appropriate programming, algorithms take only correlations into account, not the causalities behind the data. If an AI merely records the correlation between the attributes "female" and "low job level", it will propose few or no female candidates for a management position. In this way, AI effectively confirms prejudices and exacerbates social disadvantage, precisely because it (co-)decides key aspects of social participation: from lending and promotions to access to free medication or the chance of early release from prison.
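A minimal sketch, using invented data, of how a purely correlational model reproduces exactly this pattern: the historical records correlate gender with seniority, and a naive scorer that estimates P(senior | gender) from those records will systematically rank female candidates lower.

```python
# Hypothetical historical records in which gender happens to correlate
# with job level (the data, not the people, carries the bias).
history = [
    {"gender": "m", "senior": True},
    {"gender": "m", "senior": True},
    {"gender": "m", "senior": False},
    {"gender": "f", "senior": False},
    {"gender": "f", "senior": False},
    {"gender": "f", "senior": True},
]

def seniority_score(candidate):
    """Estimate P(senior | gender) from history -- correlation only,
    with no notion of why the correlation exists."""
    same = [r for r in history if r["gender"] == candidate["gender"]]
    return sum(r["senior"] for r in same) / len(same)

print(seniority_score({"gender": "m"}))  # 2/3
print(seniority_score({"gender": "f"}))  # 1/3
```

The scorer is "correct" about the historical frequencies, which is precisely the problem: it turns a past imbalance into a future hiring policy.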

AI not only influences concrete events and decisions; it also changes human perception, our language and thus our thinking. Algorithms scan search queries and suggest words via autocompletion, which users usually adopt without questioning. We speak to AI-driven voice assistants in imperatives, and what cannot be verbalized technically tends to become unthinkable as well. Here, too, manipulation takes place: the AI of the tech companies formats our experience.

From manipulative to moral AI

We want AI to make morally sound, "humane" decisions, and we are appalled when its decisions turn out to be racist, sexist or otherwise discriminatory. But the basis on which AI makes decisions is always created by humans: algorithms are only as unbiased as the people who program them and the training data they learn from. Or, as Cathy O'Neil puts it: "Algorithms are opinions embedded in code."

In order to counteract manipulative AI, the focus must be on the data and programs on the basis of which AI acts. Concrete starting points for this are:

  1. Transparency: Implicit assumptions and prejudices must be made visible. Transparency about training data is a step in this direction.
  2. Responsibility and regulation: Society wants state oversight, for example in the form of an "AI TÜV" modeled on Germany's technical inspection agency. A voluntary commitment by industry or a kind of Hippocratic oath for developers, as suggested by O'Neil, is not enough here.
  3. Causality instead of correlation: Discrimination can be avoided through causal inference and context awareness.
  4. Diversity: Moving away from the white, male "New Digital Aristocracy" enables more diverse teams.
  5. Knowledge transfer: In order for AI to be accessible and usable for everyone, a basic digital and technical understanding must be taught in school.
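The third point, causality instead of correlation, can be illustrated with a classic stratification example (the numbers here are invented): a raw comparison suggests a large gender gap in promotion rates, but adjusting for a confounder, here a hypothetical department variable, shows identical rates within each department. The gap is driven by who ends up in which department, not by gender itself, a Simpson's-paradox pattern that a purely correlational model would miss.

```python
# Invented records: (gender, department, promoted).
# Department A promotes 50% of everyone, department B promotes 10%
# of everyone -- but men cluster in A and women in B.
records = (
    [("m", "A", True)] * 40 + [("m", "A", False)] * 40 +
    [("m", "B", True)] * 2  + [("m", "B", False)] * 18 +
    [("f", "A", True)] * 10 + [("f", "A", False)] * 10 +
    [("f", "B", True)] * 8  + [("f", "B", False)] * 72
)

def rate(rows):
    """Fraction of rows with promoted == True."""
    return sum(promoted for _, _, promoted in rows) / len(rows)

# Raw correlation: looks like a large gender gap (0.42 vs 0.18).
for g in ("m", "f"):
    print(g, rate([r for r in records if r[0] == g]))

# Stratified by department: the gap disappears entirely
# (0.5 vs 0.5 in A, 0.1 vs 0.1 in B).
for dept in ("A", "B"):
    for g in ("m", "f"):
        print(dept, g, rate([r for r in records if r[0] == g and r[1] == dept]))
```

A model trained on the raw rates would "learn" to disadvantage women; conditioning on the confounder removes the spurious signal. Real causal inference requires knowing which variables to adjust for, which is exactly the context awareness the list above calls for.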

Organizations such as AlgorithmWatch or the AI Now Institute are already grappling with the social consequences of AI, doing educational work and investigating how algorithms can be made sensitive to historical data and contexts. Companies, too, are increasingly developing their own programs to detect AI bias. Facebook, for example, aims to expose unfair algorithms with its "Fairness Flow" tool.
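What such bias-detection tools check can be sketched generically. The following is not Facebook's Fairness Flow implementation, just an illustration of one common audit: comparing a model's positive-decision rate across groups and flagging disparate impact under the widely used "four-fifths" rule of thumb (the group names and data are invented).

```python
from collections import defaultdict

def disparate_impact(decisions, threshold=0.8):
    """decisions: list of (group, approved) pairs.
    Returns the approval rate per group and whether the ratio of the
    lowest to the highest rate meets the four-fifths rule of thumb."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    rates = {g: a / t for g, (a, t) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio >= threshold

rates, fair = disparate_impact([
    ("m", True), ("m", True), ("m", True), ("m", False),
    ("f", True), ("f", False), ("f", False), ("f", False),
])
print(rates)  # {'m': 0.75, 'f': 0.25}
print(fair)   # False: 0.25 / 0.75 is about 0.33, well below 0.8
```

An audit like this only exposes a disparity; it cannot say whether the disparity is justified. That judgment, as the article argues, remains with humans.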

The manipulative potential of AI and its consequences pose a serious threat to society as a whole, and most fatally to those who already suffer discrimination. Still, there is no place for unreflective panic about an almighty, manipulative AI. AI makes decisions based on the data and programs we give it. It is up to us to ensure that those decisions accord with our socially established values.