Is AI an existential threat to politics?

Existential threat or promise? What goes wrong when we talk about AI

Although artificial intelligence (AI) is a technology that billions of people interact with every day, it is easy to misunderstand. So far, society has tended to perceive it in one of two ways. Either it is an existential threat, just a few lines of code away from world domination. That view is shaped by science fiction like the movie "Terminator" and by people like Elon Musk, who has tweeted that AI is a bigger threat to humanity than nuclear weapons. (It is not.)

On the other hand, AI is perceived as salvation. Optimistic futurists and politicians envision AI improving our brains and extending our lives. When the Trump administration released its executive order on AI in February, it predicted that AI would "fuel the growth of the US economy, improve our economic and national security, and improve our quality of life." Both views are simplistic. AI isn't good or bad - it's just the next step in the evolution of computing.

We can use it to do amazing things: automating decisions, analyzing faces and voices, driving cars. Nonetheless, it is only a tool - designed, developed and used by humans. Another flaw in these assumptions? They treat humans as powerless against AI. Musk and others suggest that AI will either kill us or pamper us, and that we had better hope for the latter. That is not a helpful narrative.

Humans are the problem

We do have serious problems with AI. But they are problems of prejudice and discrimination, problems for democracy and public debate, problems of concentrated corporate power. These problems arise from the way we develop and use AI technologies. We need more people - consumers, lawyers, journalists - to recognize this and to move past these clichés. As long as AI is oversimplified or misunderstood, we cannot improve it or fix it when things go wrong.

For example, today's machine learning (a type of AI) can inherit human prejudice. That is because human bias can creep into the datasets that shape an AI's view of the world.

Last year we learned that Amazon had been building a hiring algorithm that favored men over women. It wasn't because the code chose to be misogynistic. The reason was that the code had learned from people's historical, discriminatory hiring practices. Tech companies are homogeneous: lots of young, white men. When an AI is trained on the resumes of existing employees, it learns to look for more people like them.
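To see how this happens, consider a minimal, hypothetical sketch (not Amazon's actual system): a simple model is trained to imitate past hiring decisions on invented data, and it absorbs the bias baked into those decisions even though "gender" never appears as a feature.

```python
# Minimal, hypothetical sketch: a model trained to imitate historical hiring
# decisions reproduces the bias in those decisions. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "historical" applicants: years of experience, plus a proxy
# feature correlated with gender (e.g. attended a women's college).
experience = rng.normal(5, 2, n)
womens_college = rng.integers(0, 2, n)

# Biased historical labels: equally qualified candidates with the proxy
# feature were hired less often.
hired = (experience + rng.normal(0, 1, n) - 2.0 * womens_college) > 5

X = np.column_stack([experience, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy feature comes out strongly negative:
# the model has inherited the discrimination, without ever seeing "gender".
print("weight on proxy feature:", model.coef_[0][1])
```

The point of the sketch is that nothing in the code is malicious; the model simply does what it was asked to do, which is to reproduce the patterns in its training data.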

Another example: AI can amplify dangerous misinformation. Platforms like Facebook and YouTube are powered by AI recommendation engines that decide what content billions of people see. This AI is meant to keep people on the platform so that they see more ads or buy more things. The AI learns pretty quickly that outrage and noise - conspiracy theories, hate speech - keep people scrolling.
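In its simplest form, the incentive problem looks like the toy example below: if content is ranked purely by a predicted engagement signal, whatever keeps people watching floats to the top, regardless of accuracy or harm. The posts and numbers are invented for illustration.

```python
# Hypothetical sketch of an engagement-maximizing ranker: content is ordered
# purely by predicted time-on-platform, with no notion of accuracy or harm.
posts = [
    {"title": "Local council meeting summary", "predicted_minutes_watched": 1.2},
    {"title": "Fact-checked science explainer", "predicted_minutes_watched": 2.5},
    {"title": "Outrage-bait conspiracy video", "predicted_minutes_watched": 9.8},
]

# Sort by the engagement signal alone - the outrage content ranks first.
feed = sorted(posts, key=lambda p: p["predicted_minutes_watched"], reverse=True)

for post in feed:
    print(post["title"])
```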

Too much power in too few hands

It is clear that human decisions - about what data to use for training, or what rules machines follow - can harm individuals and society. As AI becomes a ubiquitous part of our digital world, more things will go wrong. The task is to design, use and govern this technology responsibly, with these risks in mind.

That leaves one final problem: worldwide, we rely on just nine companies to steer the direction of AI. AI requires considerable computing power and training data. Right now, fewer than ten companies - all of them for-profit - have both.

The author Amy Webb calls them "The Big Nine": Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM and Apple. And these companies come from just two countries: the United States and China.

These nine companies are shaping the future of AI: how it is trained, how it is used, when it is used. Given AI's outsized role in our lives, these companies are also shaping the future of our digital society.

Shouldn't democratic governments, civil society, educational institutions and smaller start-ups in other regions also help shape the future of the digital society? The answer is, of course, yes. And they can start by shedding the AI clichés and focusing on how humans can steer things in a better direction.

Trustworthy AI

What would that look like? It could take many forms: teaching engineering and design students to actively counter bias when they build new AI systems; making code, data and models available as open source to help smaller innovators.

More should be invested in teaching young people how AI works. Companies should be pushed to be accountable and transparent about their AI projects, especially when there are signs of harm. There is no silver bullet - it is about technology developers being mindful of risk, and about citizens and governments demanding things like privacy and security wherever AI is used.

Europe is particularly well placed to build this momentum. With the General Data Protection Regulation, the EU has already set a global standard for privacy and user rights. It has also committed to keeping the power of the big tech companies in check, including fining some of them for anti-competitive behavior. This kind of firm hand on the wheel could steer our AI future in a better direction.

The recent report by the EU's high-level expert group on guidelines for "Trustworthy AI" points in the right direction - or at least raises many of the right issues. The report proposes guidelines to ensure things like human agency, privacy, transparency and diversity. However, it also stops short, describing only a loose ethical framework meant to guide the companies and researchers developing AI.

Human rights laws are fundamental

But much more is needed. As the internet citizens' movement Access Now wrote in response to the report: "Trustworthy AI has three basic components: it should be lawful, ethical and robust. While the guidelines for Trustworthy AI discuss ethics at length, they overlook at least one key question related to robustness and completely ignore the legal component." Existing data protection and human rights laws provide a starting point for these questions. We should build on them from there.

That brings us back to the clichés. Overcoming simplistic narratives about "Terminator" or an oh-so-bright future is crucial if we want AI-driven computing to truly serve the people who use it. To do that, we need to tackle the thorny questions of how to develop technology that treats people and their data well.

More importantly, we have to face the fact that we are leaving too much power in too few hands, both in AI development and in technology markets in general. These are not abstract threats from science fiction - they are real and tangible issues that we can deal with today.

Mark Surman is the executive director of Mozilla.
