Where Do Minds Belong?

Caleb Scharf

Aeon

2016-07-13

“If machine intelligence is the inevitable end-point of both technology and biology, then perhaps the aliens are hyper-evolved machines so off-the-charts advanced, so far removed from familiar biological forms, that we wouldn’t recognise them if we saw them.”

“Similarly, we can imagine that interstellar machine communication would be so optimised and well-encrypted as to be indistinguishable from noise. In this view, the seeming absence of intelligent life in the cosmos might be an illusion brought about by our own inadequacies.”

“There is also a deeper message laid bare within our futurist projections. Our notions about the emergence of intelligent machines expose our fantasies (often unspoken) about what perfection is: not soft and biological, like our current selves, but hard, digital and almost inconceivably powerful.”

“To some people, such a future is one of hope and elevation. To others, it is one of fear and subjugation. Either way, it assumes that machines sit at the pinnacle of the evolution of consciousness.”

“But are living things really compelled to become ever-smarter and more robust? And is biological intelligence really a universal dead-end, destined to give way to machine supremacy?”

“Perhaps not. There is quite a bit more to the story.”

“Some of these extravagant ideas can be traced back to John von Neumann’s astonishing conjectures on self-replicating automata, which were compiled in his posthumous book, Theory of Self-Reproducing Automata (1966). That work helped cement the concept of machines building more machines, in an exponential and perhaps uncontained explosion that could simply swamp other life forms that get in the way. Von Neumann also considered how such machines could simulate some of the functions and actions of human neurons.”

“In the years since then, electronic connectivity certainly has had a huge impact on the way that many humans go about their daily lives, and even on the way in which we problem-solve and think about any new question or challenge.”

“Who among us in the connected modern world hasn’t Googled a question before even trying to work through the answer, or before asking another human being?”

“Part of our collective wisdom is now uploaded, placed in an omnipresent cloud of data.”

“One such reality is the issue of energetics – a topic discussed by von Neumann, but often ignored in futurist conversations.”

“In computer design, a key factor is computational capacity versus energy use, sometimes quoted as computations-per-joule. As microprocessors get more complex, and silicon-based architectures get smaller and smaller (these days, to the tens-of-nanometre scales), efficiency is still improving. As a result, the computations-per-joule ratio has been getting better and better with each passing year.”

“Except, that ratio has been getting better by less and less with each passing year. In fact, some researchers have stated that there might be an upcoming ‘wall’ of energy efficiency for conventional processing architectures, somewhere around 10 giga-computations-per-joule for operations such as basic multiplication.”

“That’s a big potential roadblock for any quest for true artificial intelligence or brain-uploading machinery. Estimates of what you’d need in terms of computing power to approach the oomph of a human brain (measured by speed and complexity of operations) come with an energy efficiency budget that needs to be about a billion times better than that wall.”
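
The scale of that shortfall can be restated numerically. Here is a minimal back-of-the-envelope sketch in Python, using only the two figures quoted above; the implied “target efficiency” is simply their product, not a number given in the article itself.

    # Order-of-magnitude figures quoted in the excerpt above.
    wall_computations_per_joule = 10e9   # the projected 'wall': ~10 giga-computations per joule
    shortfall_factor = 1e9               # brain-scale computing needs ~a billion times better

    # Implied efficiency needed for brain-scale machine intelligence.
    target_computations_per_joule = wall_computations_per_joule * shortfall_factor
    print(f"implied target: ~{target_computations_per_joule:.0e} computations per joule")  # ~1e+19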

“To put that in a different context, our brains use energy at a rate of about 20 watts. If you wanted to upload yourself intact into a machine using current computing technology, you’d need a power supply roughly the same as that generated by the Three Gorges Dam hydroelectric plant in China, the biggest in the world. To take our species, all 7.3 billion living minds, to machine form would require an energy flow of at least 140,000 petawatts. That’s about 800 times the total solar power hitting the top of Earth’s atmosphere. Clearly human transcendence might be a way off.”
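
Those power figures can be cross-checked the same way. The sketch below assumes a Three Gorges capacity of roughly 22.5 GW and a total top-of-atmosphere solar input of roughly 174 petawatts; neither figure appears in the excerpt, both are standard round numbers, and the output should only be read to order of magnitude.

    # Figures quoted in the excerpt.
    brain_power_w = 20.0                  # a human brain runs on ~20 watts
    efficiency_gap = 1e9                  # current technology is ~a billion times less efficient
    population = 7.3e9                    # living human minds, per the excerpt

    # Assumed reference values (not given in the excerpt).
    three_gorges_w = 22.5e9               # ~22.5 GW installed capacity
    solar_input_w = 1.74e17               # ~174 PW of sunlight intercepted by Earth

    one_upload_w = brain_power_w * efficiency_gap     # ~2e10 W for a single uploaded mind
    all_uploads_w = one_upload_w * population         # ~1.5e20 W for the whole species

    print(f"one mind:  ~{one_upload_w / 1e9:.0f} GW (Three Gorges: ~{three_gorges_w / 1e9:.1f} GW)")
    print(f"all minds: ~{all_uploads_w / 1e15:,.0f} PW, "
          f"~{all_uploads_w / solar_input_w:.0f}x the solar power reaching Earth")

The result lands close to the “at least 140,000 petawatts” and “about 800 times” quoted above.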

“One possible solution is to turn to so-called neuromorphic architectures, silicon designs that mimic aspects of real biological neurons and their connectivity. Researchers such as Jennifer Hasler at the Georgia Institute of Technology have suggested that, if done right, a neuromorphic system could reduce the energy requirements of a brain-like artificial system by at least four orders of magnitude. Unfortunately, that big leap would still leave a gaping hole in efficiency of a factor of 100,000 before reaching the level of a human brain.”
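
The remaining factor follows directly from the two numbers in that paragraph; a one-line check, with the billion-fold gap carried over from the earlier excerpt:

    gap_to_brain = 1e9          # conventional architectures vs. the human brain (earlier excerpt)
    neuromorphic_saving = 1e4   # 'at least four orders of magnitude', per Hasler's estimate

    print(f"remaining gap: ~{gap_to_brain / neuromorphic_saving:.0e}")  # ~1e+05, i.e. a factor of 100,000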

“On paper, at least, a ‘universal’ or Turing quantum computer could exist with effectively boundless computational capacity. The British physicist David Deutsch articulated this idea brilliantly, and a little archly, in his paper ‘Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer’ (1985). Notably, he left the details of how to accomplish such a feat as a problem for the reader to figure out.”

“A genuinely universal quantum computer could, in theory, simulate to any desired precision any finite physical system, including a mind, or other quantum computers for that matter.”

“Going quantum could also allow simulations to be made massively parallel, and for probabilistic tests to be completed incredibly quickly.”

“Despite enormous laboratory and theoretical progress in recent years, however, the practical realisation of such concepts is a very complicated challenge.”

“The problem of energy efficiency rears its head here, too. Manipulating the central currency of computation, the qubit – be it a cold atom or other quantum object – might require very little energy. But holding the components of a quantum computer in a state of coherence (with all those delicate quantum states carefully preserved) is enormously taxing, and will always rely on a host of support systems and engineering that gobble up power.”

“It’s not clear that we know even roughly what the real-world computation-to-energy function for quantum computing will be.”

“In other words, the mathematics of exponentially improving machine intelligence could be sound, and yet the practical barriers could prove insurmountably steep.”

“I’d like to explore what happens if we meld the idea of slow growth for machine intelligence with the question of Fermi’s Paradox. Doing this is fun, but it’s also informative.”

“Let’s suppose that an advanced cosmic intelligence succeeds at converting itself to a machine form, or has been overtaken by its super-smart, but not exponentially better, machine creations. What happens next?”

“Because these machines are hemmed in by efficiency limits, there is a possibility that they’d end up looking at their past for new tricks to move forwards. One thing that they would know (as we already do) is that biology works, and it works extremely well.”

“Some researchers estimate that the modern human brain is at its computational limits, but it might require only a slightly cleverer machine to re-engineer such a complex organ. In other words, there could be a more optimal trajectory that leads away from machines and back to biology, with its remarkable energy efficiency.”

“There is also no guarantee that machine intelligences will be, or can be, perfectly rational. To engage with a complex universe, where mathematics itself contains unprovable theorems, a touch of irrationality might be critical seasoning.”

“Right now, we routinely speculate that the future of our intelligence lies in some other form, silicon or quantum perhaps, that we perceive to be superior to flesh. Perhaps the same theatre plays out for any intelligence. Machines might want to become biological again for practical reasons of energetics, or for other reasons that we cannot imagine or comprehend.”

