Hasn't the singularity already happened in humans?

Euphoric technophiles see a new apocalypse coming - should that be taken seriously?

For some it would be a big bang of intelligence, for others nothing but a black hole: nobody knows what will happen when the so-called singularity arrives and machines outstrip human thinking. But worse than the realization of today's fantasies could be their failure.

Who's Afraid of the Singularity? Nobody. Until recently, very few people knew the term in anything but its physical sense - those who heard it thought of the limits of our space-time continuum, of the big bang and of black holes. But that is changing. More and more people, mostly young ones, now associate the term with Vernor Vinge, Ray Kurzweil and the thesis that artificial intelligence will, within the next few years, reach the point where it no longer needs human intelligence, begins an ever faster development of itself and fills the universe with the glow of its new spirit.

Hardly anyone is afraid of it. There have been too many technological promises of salvation already. And this one is simply too megalomaniacal, crazy and unworldly to be taken seriously.

New spirit, new world

And what if it does come, the singularity? After all, Elon Musk warns again and again about artificial intelligences becoming autonomous - and he should know what he is talking about: besides Tesla, SpaceX and Hyperloop, with OpenAI and Neuralink he also keeps a close eye on the development of artificial intelligence and its networking with the human brain. So maybe one should think about it after all.

How Ray Kurzweil arrived at his singularity thesis in 2006 can be quickly summarized. For him, the history of the universe divides into six successive epochs. First came inanimate matter (1): Kurzweil understands it as information that makes use of physical and chemical exchange processes. Life followed (2): information that makes use of DNA and biological exchange processes. Then brains emerged (3): for Kurzweil, information that makes use of neuronal exchange processes. With humans, a new form of communication arose in language, writing and data processing, which Kurzweil understands as information that makes use of technological exchange processes (4). Some time ago we entered the symbiosis of human and machine intelligence (5).

Within the next ten to fifteen years, this development is supposed to lead to a state in which machine intelligence is no longer dependent on humans (6). Computers leave slow human thinking behind and develop one another in an ever faster, self-compounding sequence - which leads to an unimaginable flood of inventions. For Kurzweil this is a big bang of intelligence, for Musk a black hole, for both an apocalypse: a new world that leaves old humanity behind and lets a new spirit rule.

But how close is the singularity really? If one understands the operations of the human brain as computing power, then mankind was left behind by the so-called supercomputers long ago: that pocket calculators are better at "mental arithmetic" than we are is a childhood experience even for the older generations - and so is the reverse: we know that what we ourselves are good at is not mere computing power.

Now, of course, we are no smarter than the computer developers, who recognized this problem some time ago. So what are they working on right now? A major role is played by the hypothesis that the singularity is only possible where computers have an intentional consciousness grounded in a feeling for their own condition. And here there is obviously one problem above all: computers are - as the English word suggests - performers of computations, and one can only compute what has had its parameters and goals defined in advance. Even if there are now self-learning computers that (like Google's AlphaGo) can play Go "creatively" and develop their own strategies in it, they are still computers in precisely this sense. AlphaGo could never come up with a thought like: "I can't find any opponents in Go any more - maybe I should try chess for once!" Computers lack a sense of the meaning of the situation in which they find themselves, and of the directedness that goes with it: intentional consciousness.

The problem of the body

The cognitive philosopher David Chalmers speaks here of the "hard problem of consciousness", and long before him the phenomenologist Hubert Dreyfus, who accompanied early computer development, traced this problem back to the fact that computers were developed from the top down. As soon as they could perform the cognitive functions that are most difficult for humans, such as complicated calculations, it was assumed that the rest would no longer pose a big problem. But the only reason people find the higher cognitive feats so difficult is that, in evolutionary terms, they are very late adaptations.

In contrast to computers, which were developed primarily for this purpose, people are therefore not particularly good at mental arithmetic - but they are very good at walking down a flight of stairs or sensing a change in mood. Computers, on the other hand, beat the best human chess and Go players, yet one look at the RoboCup (the robot soccer world championship) is enough to see that it will be decades before a team of humanoid robots could hold its own against an under-11 youth side in the lowest amateur league.

If Dreyfus (and Chalmers too) is right, then it is here - in physicality, emotionality and sensuousness - that the key lies to intentionality and to a life whose parameters are not always clearly defined in advance. But how are computers supposed to get their data to somehow "feel like" something to them, so that what they compute stands under an intentional tension - when they lack a body?

People supply the sensibilities

Attempts to solve this problem currently seem to proceed in three broadly interrelated directions. First, developers are working on "neuronally" functioning hardware that is modeled on the human brain or is to emerge from being coupled with it. If this variant bears fruit and leads to the singularity, then it would come suddenly and could unfold roughly as Kurzweil imagined. One may well doubt his Enlightenment fantasy, though, for the need to equip computers with emotive sensibilities would show one thing above all: in this scenario it would not be rationality, not computing power guided by pure reason, that brought about the singularity - it would instead be the supposedly "irrational": the sensual, the emotional and the physical.

The second path to the singularity would run through the networking of existing artificial intelligences, which - according to the plan of the American Ben Goertzel - would be connected worldwide in a way similar to brain regions.

The third way is that of big data: for some time now, software has been able to access large-scale data on physical and mental states. Photos, videos and texts from social networks, health apps, movement data, voice recordings from Siri and Alexa, biometric facial recognition: all these data permit precise profiles of preferences, moods and states of health, and enable software to adjust itself to humans.

The same process can also be described the other way round, namely as one in which networked machines acquire intentions, sensibilities and desires of which they would not (yet) be capable on their own. From this perspective, people appear as the sensing organs of a machine - and the more data they leave behind, and the more precise those data are, the easier it becomes to gradually replace this organ: a slower path into the singularity.

Bet on the unknown

Presumably all of these attempts will fail. Neuroscientists know how little is really known about the human brain; philosophers know how little they can define consciousness. The computer developers' bet on the singularity is therefore a bet on the technical realization of something whose workings nobody knows - and which nobody can even properly define.

But this insight can hardly be reassuring. For even worse than the success of one of these strategies could be its failure: the history of computer development has so far been marked by two constants. First, what was supposed to be built very rarely came into being. Second, what actually was built always had consequences nobody had foreseen. In this respect, something unpredictable is to be expected from the singularity as well - machines, for example, that empower themselves further and further through exponential self-development, while humanity could at best speculate that this divine, intelligent superiority revolves around nothing but a soulless void. And nothing could be done about it.

Jan Söffner is professor of cultural theory and analysis at Zeppelin University Friedrichshafen.