Could quantum computers cause a technological singularity?

The super-intelligent specter

Superintelligence

A specter is haunting the world: the specter of superintelligence. Such an ultra-intelligent machine would surpass the intellectual abilities of every human, wrote its "inventor," the mathematician and cryptologist Irving John "Jack" Good, in 1965.

The bible of the superintelligence catastrophists, however, is the Oxford philosopher Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies." Tesla boss Elon Musk also draws his artificial-intelligence nightmares from it: Musk is not an AI expert, yet he describes artificial intelligence as a "fundamental risk to the existence of human civilization."

Unfortunately, this super-intelligent specter also spreads fear of today's "artificial intelligence" programs, which have nothing to do with superintelligence. Yet these programs could help us defeat diseases, prevent wars, stop global warming and gain unimaginable knowledge about nature, humans and their brains. Moreover, the scaremongering about superintelligence obscures the view of the real problems with actually existing artificial intelligence. Do we have to be afraid of superintelligence? Where do the real dangers of artificial intelligence lie? That is what I try to explore in this blog post.

First, let's consider whether researchers could soon develop an artificial intelligence that is smarter than any human. According to the superintelligence scenario, a superintelligence developed by us would suddenly start improving itself and become exponentially smarter. This is called the technological singularity: the big bang of superintelligence.

Can you imagine that? In an hour you are twice as clever as you are now, in two hours four times, in three hours eight times, in four hours sixteen times: your intelligence explodes! At noon you plan something for the evening, and in the evening you have to think: what was I thinking when I made that plan? I'm a complete idiot! Wouldn't a super-intelligent machine be more of a case for the psychiatrist?

Nick Bostrom writes: "It is entirely possible to have a superintelligence whose sole aim is to make something completely arbitrary, such as paper clips, and which would resist with all its might any attempt to change this goal. This intelligence would first turn the entire earth into paper-clip factories and later even parts of the universe."

Turning the entire earth into paper-clip factories is not super-intelligent, it is super-stupid. That is what a computer program would do that had been optimized solely for the task of making paper clips. You simply switch such a program off when it starts doing nonsense, instead of ranting about superintelligence.

The opinion of the AI experts

According to a survey, 92.5 percent of leading artificial intelligence experts see superintelligence beyond any predictable horizon. One of the experts wrote:

"Nick Bostrom is a professional scaremonger. His institute has the task of finding existential threats to humanity. He sees them everywhere. I am tempted to call him the 'Donald Trump' of AI."

In an interview with Russ Roberts, Rodney Brooks (MIT), one of the leading robotics experts, said: "He (Nick Bostrom) has no more expertise in AI than in the search for extraterrestrial life. But that is his shtick. As for the rest of them, including Nick: none of the people who worry about this have ever worked in AI. They were outsiders. The same applies to Max Tegmark." Max Tegmark wrote another AI bestseller: "Life 3.0: Being Human in the Age of Artificial Intelligence." In his Robohub post "Artificial Intelligence is a Tool, Not a Threat," Brooks goes deeper into the subject.

Many media, however, see it differently from the AI experts: "Will robots soon rule the world?" asked the Weekend magazine in Vienna. Below the headline there was something about vacuum-cleaner robots, and I was delighted that vacuum-cleaner robots will soon take over in Austria. A couple of them could then be sent across the border to Hungary.

But the author went on: "We have long gone beyond the vacuum-cleaner robot. Have humans already gone too far, and will artificial intelligence soon take control?" Is he right? Have we gone too far with robot vacuum cleaners?

Hand on heart: Wouldn't you prefer a vacuum cleaner robot as President to Donald Trump anyway? Artificial intelligence does much less damage than natural stupidity: a machine would never say, "It's freezing and snowing in New York - we need global warming." A machine would never say that women should be treated like shit.

A machine only does things like that if a person has implanted them in it, for example by giving the machine a dataset laden with prejudices to learn from. That is what happened when Microsoft's chatbot Tay (an artificial neural network, i.e. an AI program) was supposed to learn from the comments of "young people" how "young people" speak today. And that on Twitter!

The chatbot had to be unplugged when she (Tay is a "she") began uttering sayings from the Twitter trolls like: "Hitler would do a better job than the monkeys we have now." People were shocked that artificial intelligence is racist. But what was the poor chatbot supposed to do? They wanted Tay to learn to speak like the idiots on Twitter, and then they wondered why she spoke like the idiots on Twitter.
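
The mechanism behind Tay's derailment is not mysterious: a language model only reproduces the statistics of its training data. Here is a toy sketch of that principle (Tay itself was a neural network, not this simple Markov chain, and the corpus below is invented):

```python
import random
from collections import defaultdict

def train_markov(corpus_words):
    """Learn word-to-word transition counts from a training corpus."""
    model = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n_words, seed=0):
    """Generate text by sampling the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# The model has no opinions: it mirrors whatever corpus it is given.
# Feed it polite text, it talks politely; feed it trolls, it talks like trolls.
polite = "good morning dear friends good morning dear neighbours".split()
model = train_markov(polite)
print(generate(model, "good", 4))
```

Swap the training corpus for Twitter-troll text and the same code produces troll text, which is exactly what happened to Tay, only at neural-network scale.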

Superintelligence on an abacus?

All the basic algorithms for artificial neural networks, the ones powering today's AI hype, were developed around 30 years ago. Backpropagation, "the" basic algorithm of artificial neural networks, was introduced in 1986 by David Rumelhart, Geoffrey Hinton and Ronald J. Williams. Why then, just a few years ago, could AI image-recognition programs not tell a monkey from a hippopotamus, if the good algorithms already existed?

Because at that time our computers could not handle deep (multi-layered) neural networks and the large amounts of data they need for learning. First, computers with high computing power had to be developed. But we will not get vastly more powerful conventional computers: in microprocessor manufacturing we are slowly approaching atomic limits. If error-free quantum computers cannot be developed, we will not enter a new era of computing.

But if our computers always lag behind our algorithms in performance, how is an exponentially self-improving maxi-superintelligence suddenly supposed to emerge? That would be like solving the Schrödinger equation of a complex molecule by counting on your fingers. Perhaps there will be superintelligence at some point, but certainly not in the next 100 years. And we will know long in advance whether it is possible at all.

Nevertheless, the media scare us as if there were no worse threats to humanity: wars! Terror! Diseases! Climate change! Mario Barth!

Artificial intelligence today

Today's artificial intelligence programs are artificial neural networks: optimization methods that, after a lot of training, can recognize patterns in very large datasets and distinguish things. They are just collections of interconnected nodes in a computer program. During training, the connections between these nodes are gradually strengthened or weakened with the help of mathematics until the network gives an optimal answer to its task.
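
This "strengthening and weakening of connections" fits in a few lines. A minimal sketch (a single artificial neuron, invented for illustration): it learns the logical AND function by nudging its connection weights a little after every example, and nothing more mystical than that happens.

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Training examples for logical AND: output 1 only for input (1, 1).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # connection strengths, initially neutral
lr = 0.5                     # learning rate: how hard each nudge is

for _ in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        err = y - target          # how wrong the neuron currently is
        w1 -= lr * err * x1       # strengthen or weaken each connection
        w2 -= lr * err * x2
        b  -= lr * err

print(round(sigmoid(w1 * 1 + w2 * 1 + b)))   # input (1, 1) -> 1
print(round(sigmoid(w1 * 0 + w2 * 0 + b)))   # input (0, 0) -> 0
```

Deep networks do the same thing with millions of connections and the backpropagation algorithm mentioned above, but the principle is this humble nudging.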

Such an AI program, trained on millions of living-room images, can identify every object in an unfamiliar photo of a living room better and faster than a human. But if you copy an elephant into the living-room picture, the program is so confused that it cannot even tell the television from the elephant. The program has no idea what a television is or what an elephant is. When an elephant shows up in a living room, the program is so confused it mistakes the chair for the sofa. And if an elephant had been copied into all the living-room pictures during training, the network would not recognize a living-room picture without an elephant as the picture of a living room.

If one shows many pictures to a convolutional neural network, it can learn the essential features of the objects in those pictures very well. After training, the network also recognizes these objects in images it has never seen before.
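
What one convolutional filter does can be sketched in plain NumPy. The filter below is hand-written to respond to vertical dark-to-bright edges; a real convolutional network learns thousands of such filters from data instead of having them written by hand.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image and record its response everywhere."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Tiny "image": dark left half, bright right half -> one vertical edge.
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)

# A 1x2 filter that fires on dark->bright steps.
edge_filter = np.array([[-1, 1]], dtype=float)

response = convolve2d(image, edge_filter)
print(response[0])  # strongest response exactly at the dark->bright edge
```

Stack many learned filters in many layers and you get the feature hierarchies that let such networks recognize objects in images they have never seen.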

In the same way, one can train a recurrent neural network (more precisely, a long short-term memory network) to find features in coherent strings of text, for example in translations between English and Japanese and between English and Korean. From this the network forms a universal space of meanings (a universal representation of the semantic features). With the help of this representation, a second recurrent network can then translate texts from Japanese into Korean, even though it was never trained on any Japanese-Korean translations.

This marvel is called zero-shot translation. It is fantastic, but at the same time it is the most an artificial network can do: dig patterns out of large datasets. More is not possible! How is such a program supposed to dominate us? And why?
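
The idea of a shared "space of meanings" can be illustrated with a toy example. The word vectors below are hand-made for illustration (a real multilingual network learns such a representation from parallel text); translation then becomes nearest-neighbour lookup in the shared space.

```python
import numpy as np

# Every word, in whatever language, is a point in the SAME vector space.
# These coordinates are invented for illustration only.
shared_space = {
    ("ja", "neko"):    np.array([0.90, 0.10]),  # Japanese "cat"
    ("ja", "inu"):     np.array([0.10, 0.90]),  # Japanese "dog"
    ("ko", "goyangi"): np.array([0.88, 0.12]),  # Korean "cat"
    ("ko", "gae"):     np.array([0.12, 0.88]),  # Korean "dog"
}

def translate(word, src, dst):
    """Map the word into the shared space, return the nearest word in dst."""
    v = shared_space[(src, word)]
    candidates = [(np.linalg.norm(v - vec), w)
                  for (lang, w), vec in shared_space.items() if lang == dst]
    return min(candidates)[1]

# Japanese -> Korean, without any Japanese-Korean dictionary:
print(translate("neko", "ja", "ko"))
```

Because "cat" lands at nearly the same point regardless of language, Japanese-to-Korean translation falls out for free, which is the essence of the zero-shot trick.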

The real problem with AI

The problem is not that a machine superintelligence will eventually dominate us. The problem is that people are already manipulating and controlling us with the help of artificial intelligence programs. Basically, it's our own fault. Why do we let online platforms decide what we see, read, eat and buy by following their "recommendations"? Don't we, along with our freedom of choice, end up losing all of our freedom?

Whether Google's artificial intelligence tailors our internet search results to us, Amazon offers us products that we "need," Netflix suggests films that we "want" to see, or Facebook pours out messages and advertisements for products that we supposedly "wish for": the platforms personalize our filter bubble with the help of artificial intelligence programs down to the size of a rat hole. Is that the right future for artificial intelligence? As manipulation software?

But I admit that I am a bit ambivalent here myself, and I don't want to portray the online platforms only negatively. Without Google, Facebook and the other tech companies, the current revolution in artificial intelligence would not exist: the great successes of Google's DeepMind in the treatment of eye diseases and in protein folding, which could revolutionize our medicine; the Google Neural Machine Translation system, which does amazing things; the mastery of image recognition in the AI departments of Google and Facebook.

Nor should we forget the revolution in autonomous driving, which could have a positive impact on climate change. And so on. But what happens when "recommendations" become compulsory? What if a state takes control of online platforms that manipulate us, or uses such control and manipulation programs itself? In dictatorships this is common practice.

AI greetings from China

In another blog post I mentioned how I once asked a Chinese AI researcher how it is possible that China is slowly becoming a world leader in artificial intelligence. "We have the data!" he said. Ever faster computers were one of the two reasons for the current revolution in artificial intelligence. The second reason was the large datasets accumulated thanks to the internet: without huge amounts of data, no artificial neural networks can be trained.

Well-known online platforms such as Google or Amazon also need large datasets to develop their AI programs further. Data is the oil of the age of artificial intelligence. And that is a big problem: the more data protection, the less data is available to the platforms, and the more easily dictatorships can become world leaders in artificial intelligence. Is that good for us? I definitely don't want Russia or China to be the AI world champions. Unfortunately, China already is.

In one Chinese school, the eating behavior of the students was optimized with AI programs so that they did not eat too much cake. In the smart Chinese classroom, everything may soon be evaluated with AI programs, for instance whether the students are paying attention or looking out of the window in class. That has already been tested.

In the same way, the Chinese rating system for the social behavior of citizens is supported by artificial intelligence. However, one should not forget that on Western online platforms our buying behavior has long been rated by others, as on eBay.

On Amazon, a person who has read only one book in their life can tear Franz Kafka apart. "YAAAAAWN," comments one reviewer of Kafka's "Das Schloß" ("The Castle"): "Soooooo many words and sooooooo little said." And whoosh! Kafka gets one star instead of the starry sky he deserves. Kitschy vampire novels, on the other hand, are rated highly. We have gotten used to that. But is there a limit to getting used to control and manipulation? Won't we, in the end, get used to a "Black Mirror" future? In any case, a broad push in that direction is coming from China.

Artificial neural networks now recognize faces better than humans do. Cameras are being set up all over China. But Chinese recognition programs don't even need faces to put the correct name to a person: their gait is enough.

The photo of a famous Chinese manager was shown on a large public screen because she had supposedly jaywalked across a busy street: convicted by artificial intelligence and pilloried. Unfortunately, the camera had merely captured her photo in an advertisement on the side of a passing bus. Is that super-intelligent?

It also occurs to me: because some monkey species walk in single file on all fours, these monkeys recognize each other better by their behinds than we humans do by our faces. What would Chinese artificial intelligence programs have to learn to distinguish if humanity had never developed the upright gait?

Shouldn't we be asking ourselves to what extent China can become a role model for our Western tech companies and governments when it comes to AI control? Aren't they already taking inspiration from Beijing in the hunt for data? Will our turbo-capitalism be able to withstand a brave new world of control and manipulation with artificial intelligence programs, as in China?

Of course, we in the Western democracies handle our private data much more consciously than the Chinese, who have no choice but to accept their government's control frenzy. But are we really careful with our private data?

Deep fakes

At one of my school readings, a photographer wanted to take pictures for the internal documentation of the library that had organized the reading. A third of the students were not allowed to be photographed at their parents' request. After the reading, however, all of the students wanted to take a selfie with me. A few minutes later, thanks to social networks, our selfies could be admired all over the world.

Our selfies can become a playground for AI manipulation programs such as deepfakes: if an artificial neural network such as a GAN (generative adversarial network) is trained with a few hundred selfies, the faces from those selfies can be built into any video and any photo. These fakes are now so good that they can only be unmasked by another artificial neural network. So if we want to provide enough material for blackmailers and manipulators, we should keep happily posting our selfies online, like me. Yes, I know, most blackmailers are only interested in celebrities, but who knows what else awaits us in terms of data abuse?
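
The adversarial idea behind such fakes can be sketched on toy one-dimensional data: a generator produces numbers that should look like samples from a real distribution, and a discriminator tries to tell real from generated; both improve by competing. Real deepfake systems use deep convolutional networks and images; this sketch, with invented hyperparameters, only shows the competing training loop itself.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * noise + g_b
d_w, d_b = 0.1, 0.0   # discriminator: P(real) = sigmoid(d_w * x + d_b)
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 0.5)   # one sample of the "real" data
    z = rng.normal()              # noise input for the generator
    fake = g_w * z + g_b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(d_w * real + d_b), sigmoid(d_w * fake + d_b)
    d_w -= lr * ((d_real - 1) * real + d_fake * fake)
    d_b -= lr * ((d_real - 1) + d_fake)

    # Generator step: change G so that D mistakes its output for real.
    d_fake = sigmoid(d_w * fake + d_b)
    grad_fake = (d_fake - 1) * d_w    # d(-log D(fake)) / d(fake)
    g_w -= lr * grad_fake * z
    g_b -= lr * grad_fake

# After training, the generator's output should have drifted toward
# the real data (mean 4.0) from its starting point around 0.
fakes = g_w * rng.normal(size=1000) + g_b
print(float(fakes.mean()))
```

Replace the numbers with images and the linear models with deep networks and you have, in outline, the recipe behind deepfakes, and also behind the detector networks that hunt them.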

"Whoever takes the lead in artificial intelligence will rule the world," said Vladimir Putin. It is time to think more about actually existing artificial intelligence than about science fiction and superintelligence: How could we save the earth and ourselves from ourselves with the help of artificial intelligence? And at the same time, how do we prevent other people, whether in companies or in governments, from manipulating and controlling us with these programs, and to a degree that Orwell did not even dare to "dream" of?

  • Posted in: General, Consciousness, Computational Biology, Deep Learning Neural Networks, Brain, Google, Brain Research, AI, Convolutional Neural Networks, Artificial Intelligence, Artificial Neural Networks, Learning, Machine Language, Natural Language Processing, NLP, Recurrent Neural Networks, Superintelligence, Deep Neural Networks

Dear visitor,

welcome to my SciLogs blog "Brain & AI". Here I want to write about all possible aspects of artificial intelligence research. I am very happy about every comment and discussion, because as my mother often said: "As long as the language is alive, people are not dead." I often post news about artificial intelligence, artificial neural networks and machine learning on my Facebook page: Machine learning.

Here is something about my career: I studied chemistry at the Technical University of Munich and then did my doctorate at the university's Chair of Theoretical Chemistry on the origin of the genetic code and double-strand coding in nucleic acids. After my doctorate, I continued research there for a few years on the genetic code and complementary coding on both strands of nucleic acids: neutral adaptation of the genetic code to double-strand coding. Keywords for my scientific work: molecular evolution, theoretical molecular biology, bioinformatics, information theory, genetic coding.

Currently I am a lecturer in artificial intelligence at the SRH Fernhochschule and the Spiegelakademie, an AI keynote speaker, writer, stage author and science communicator. Among other things, I am a two-time runner-up in the German-language Poetry Slam championships. My book "Doktorspiele" was filmed by 20th Century Fox and shown successfully in German cinemas in 2014. The new edition of the book was published by Digital Publishers. My non-fiction book on artificial intelligence, "Is that intelligent or can it go away?", appeared in October 2020. Tessloff-Verlag publishes my children's detective novels "Data Detectives", beautifully illustrated by Marek Blaha, with many references to AI, robots and digital worlds.

Have fun with my blog and all the discussions here :-). Jaromir