Ecology of Intelligence
A Talk By Frank Wilczek [7.23.19]

I don't think a singularity is imminent, although there has been quite a bit of talk about it. I don't think the prospect of artificial intelligence outstripping human intelligence is imminent because the engineering substrate just isn't there, and I don't see the immediate prospects of getting there. I haven't said much about quantum computing; other people will. But if you're waiting for quantum computing to create a singularity, you're misguided. That crossover, fortunately, will take decades, if not centuries.

There's this tremendous drive for intelligence, but there will be a long period of coexistence in which there will be an ecology of intelligence. Humans will become enhanced in different ways, at first relatively trivial ones involving smartphones and access to the Internet, but the integration will become more intimate as time goes on. Younger people who interact with these devices from childhood will be cyborgs from the very beginning. They will think in different ways than current adults do.

FRANK WILCZEK is the Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and author of A Beautiful Question: Finding Nature's Deep Design.

ECOLOGY OF INTELLIGENCE

FRANK WILCZEK: I’m a theoretical physicist, but I’m going to be talking about the future of mind and intelligence. It’s not entirely inappropriate to do that because physical platforms are absolutely a fundamental consideration in the future of mind and intelligence. I would think it’s fair to say that the continued success of Moore’s law has been absolutely central to all of the developments in artificial intelligence and the evolution of machines and machine learning, at least as much as any cleverness in algorithms.

First I'll talk about the in-principle advantages of artificial intelligence with existing engineering principles. Then I will talk about the enormous lead that natural intelligence in the world has, although there are obviously great motivations for having general-purpose artificial intelligence—servants, or soldiers, or other useful kinds of agents that don't yet exist. Then I'll talk a little bit about the forces that will drive toward increasing intelligence. Perhaps that's superfluous here, since we've been talking about how improvements in intelligence are an end in themselves, but it's worth at least saying why that's going to happen. Finally, I'll argue for an emphasis on a new form of engineering that is not being vigorously cultivated, and I'll draw some consequences for what the future of intelligence will be.

One of the advantages of artificial over natural intelligence is that it is extraordinarily powerful, both quantitatively and qualitatively. Take speed, for instance. Transistors, which are the basic decision-making elements or information processors in modern computers, operate at 10 billion operations per second. If you ask how fast human brains work, say by the rate at which we stop noticing that movies are a series of still images rather than continuous motion, it's about 40 frames per second. There's a factor of several hundred million there, give or take an order of magnitude. Machines are a lot faster. They are also much freer of errors and better able to correct them, because they operate digitally. Associated with that, they have the ability to download enormous amounts of information seamlessly and automatically.
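A back-of-the-envelope check of that ratio, as a sketch using the figures above (the specific constants are illustrative assumptions, not numbers beyond those cited in the talk):

```python
# Rough speed comparison: transistor switching rate vs. the frame rate at
# which human vision fuses still images into continuous motion.
transistor_ops_per_sec = 10e9   # 10 billion operations per second, as cited above
flicker_fusion_hz = 40          # roughly 40 frames per second

ratio = transistor_ops_per_sec / flicker_fusion_hz
print(f"speed ratio ~ {ratio:.1e}")   # ~2.5e8, i.e. several hundred million
```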

Their architecture is known because they were built, so they're modular. You can add abilities to them, you can add programs, but you can also add senses. If you want them to, say, look at scenes in ultraviolet, you'd plug in an ultraviolet camera. They’re ready for quantum mechanics, so if quantum mechanics turns out to be an important way of processing information because it opens up new levels of parallel processing, then, again, you can plug it in as a module. And they have a very good duty cycle. They don’t need care and feeding and, most importantly, they don’t die.

Artificial intelligence has many advantages, so it's almost paradoxical that machines aren't doing better than they are. What advantages does natural intelligence have in the present competition? For one thing, it's much more compact. It makes use of all three dimensions, whereas existing semiconductor technology is basically two-dimensional. It's self-repairing, whereas chips are very delicate and have to be made in expensive clean rooms. Lots of things can go wrong with artificial intelligence, and errors frequently make it necessary to shut down and reboot. Brains aren't that way.

We have integrated input and output facilities—eyes, ears, and so forth—that have been sculpted over millions or billions of years of evolution to match the world we find ourselves in. We also have good muscular control of our bodies and speech. So, we have very good input and output facilities that are seamlessly integrated into our information processing. While impressive, those things are not at all outside the plausible domain of near-future engineering. We know how to make things more three-dimensional. We know how to work around defects and maybe build in some self-repair. There are clear ways forward in all those things, and there are also clear ways forward in making better input and output modules.

Although the input and output modules for human brains are very impressive, they by no means approach physical limits. Even your smartphone can make better images, and computers can talk. In some very restricted areas we are near physical limits, but we don't exhaust them, except in a few exceptional cases. For instance, our resolution in space and time of vision, which is our best sense, is not that good. Vision only samples a limited part of the spectrum, and even within that limited part it takes just three crude averages. We don't sense polarization. Machines can do all those things.

Where humans do have a qualitative advantage—far beyond anything in existing engineering—is in the connectivity and development of their basic units. The brain is made out of tens, or even hundreds, of billions of units, each of which is an impressive module. Then there are the glial cells that help things along. These were made by processes of self-reproduction and exponential growth. Current engineering doesn't have anything like that, where you have exponential growth of sophisticated units that self-reproduce. Brains also have enormous amounts of connectivity. Semiconductor technology has maybe a hundred connections per unit, whereas the brain has thousands.
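For a rough sense of the scale involved, here is a small calculation using standard textbook estimates (these particular numbers are assumptions, not figures given in the talk):

```python
# Order-of-magnitude estimate of the brain's total connectivity.
neurons = 8.6e10            # ~86 billion neurons, a common textbook estimate
synapses_per_neuron = 7e3   # thousands of connections per neuron

total_synapses = neurons * synapses_per_neuron
print(f"total synapses ~ {total_synapses:.0e}")  # ~6e14, hundreds of trillions
```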

These differences are so vast quantitatively that they count as qualitative differences between current artificial intelligence engineering and natural intelligence. This is where natural intelligence has a big edge, and that edge is enormously useful.

I was very pleased to hear Alison's talk first, because this touches on the learning algorithms and the learning process that humans use. Humans start with this vast collection of neurons and connections and spend a lot of time sculpting it and getting rid of much of it. That's the way human learning mainly works: by interacting with the world and getting feedback. Some connections get reinforced, while others get winnowed away. This has now been found to be a very powerful way of learning things in artificial neural nets as well. Real neural nets, however, are on another scale altogether because they're bigger, better hooked up to the external world, and more connected.
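As an illustration of the reinforce-and-winnow idea in an artificial setting, here is a minimal Python sketch of magnitude-based weight pruning on a toy problem (the task, constants, and pruning rule are assumptions chosen for illustration, not a method described in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task in which only a few "connections" actually matter.
n_samples, n_features = 200, 50
X = rng.normal(size=(n_samples, n_features))
w_true = np.zeros(n_features)
w_true[:5] = rng.normal(size=5)          # 5 meaningful connections
y = X @ w_true

# Start over-connected: every feature gets a weight, like the young
# brain's overabundance of synapses.
w = rng.normal(scale=0.1, size=n_features)

# "Reinforce" useful connections with plain gradient descent.
lr = 0.01
for _ in range(2000):
    grad = X.T @ (X @ w - y) / n_samples
    w -= lr * grad

# "Winnow" the rest: zero out the 80% of weights with smallest magnitude.
threshold = np.quantile(np.abs(w), 0.8)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

print("nonzero weights after pruning:", int(np.count_nonzero(w_pruned)))
print("mean squared error after pruning:", float(np.mean((X @ w_pruned - y) ** 2)))
```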

Now I'd like to talk about why I think there will be an evolutionary drive towards increasing intelligence as an end in itself: a demand side, as opposed to the supply side of people who simply want to build better machines. First of all, there are consumers. Human beings want to get an edge over other human beings by improving themselves and by having better machine helpers. They'd also like to improve their children and have servants and so forth. Obviously, there's a tremendous consumer demand. There's also a military demand, which is worrisome for obvious reasons; namely, because the utility functions for military artificial intelligence are going to be things that could easily go awry.

Then there's the drive towards exploration of space. Human bodies are very delicate; they are not radiation-hardened, they need water and supplies, and many things can go wrong, as the space exploration program has shown. It would be much more efficient, and probably inevitable, to send cyborgs or artificial objects as the vanguard of space exploration. So, if we want to expand intelligence beyond the biosphere, that's going to be an important drive. Let me draw some implications from these remarks, because they're meant also to stimulate discussion.

I don't think a singularity is imminent, although there has been quite a bit of talk about it. I don't think the prospect of artificial intelligence outstripping human intelligence is imminent because the engineering substrate just isn't there, and I don't see the immediate prospects of getting there. I haven't said much about quantum computing; other people will. But if you're waiting for quantum computing to create a singularity, you're misguided. That crossover, fortunately, will take decades, if not centuries.

There's this tremendous drive for intelligence, but there will be a long period of coexistence in which there will be an ecology of intelligence. Humans will become enhanced in different ways, at first relatively trivial ones involving smartphones and access to the Internet, but the integration will become more intimate as time goes on. Younger people who interact with these devices from childhood will be cyborgs from the very beginning. They will think in different ways than current adults do.

Side by side with that, there will be autonomous intelligence and network intelligence. There will be a whole ecology of different kinds of powerful intelligence interacting with each other for decades. Now, that's short on the timescale of biological evolution, but it's reasonable on the timescale of human political and economic institutions. So, there will be the opportunity to evolve morality. It's fortunate that there will be the possibility of learning by experience, by interacting with different kinds of intelligence.

The idea that you can program morality, just like the idea that you can program other things that humans are good at, is very misguided. We have to learn these things by interacting with the world and doing them. That's a big theme.

We're very good at walking, at learning language, at constructing a three-dimensional world from partial information that arrives at our retinas, but we don't know how we do any of those things. We learn to do them largely by interacting with the world. We understand even less how we learn morality or even what it is, but it comes from interacting with the world and other human beings. It's fortunate that instead of a singularity there will be a time of coevolution, and that's what the future of intelligence is going to look like.


ROBERT AXELROD: I agree with your statement that AI and military use could easily go awry and, therefore, we need to be quite cautious about it. What about the analogy that autonomous vehicles could go awry? They’re already ten times better than humans.

WILCZEK: That reminds me of the talk we just heard about extreme cases. All you have to do is have a runaway vehicle that breaks down somehow.

AXELROD: Okay, in terms of accidents per mile driven, maybe ten is too much.

RODNEY BROOKS: That statistic is way off base.

AXELROD: It’s not unreasonable to say that if they’re not there now, they will be at least 1.2 times better than humans. In other words, an insurance company would rather insure an autonomous vehicle than a teenager.

BROOKS: This is the popular view in the press, and it is very misguided.

PETER GALISON: Because there’s no data or because the data is the other way?

BROOKS: The data is much higher for cars, and the conditions under which they're driving are very different from the conditions in which humans drive. When you couple human pedestrians and human drivers, things change dramatically.

WILCZEK: This is a good example of the dangers of trying to solve complicated problems a priori without experience. We need practical experience with these things.

BROOKS: There has been a total turnaround in the automobile industry in the last three months on the predictions of when they’re going to put cars out there. I’m actively involved in this area.

WILCZEK: The big message that I take from this analysis is that what's missing in artificial intelligence, and what humans do very well, is learn from the world. That’s a very powerful source of information. If you can take information directly from the world by interacting, it may look slow, it may look inefficient, but the bandwidth of what’s coming in is so enormous that it’s worth it. Learning by doing should not be underestimated.

IAN MCEWAN: What is the physicist's view of the chances of making a self-conscious machine? Is there something in the nature of matter?

WILCZEK: Well, I can’t speak for all physicists, but most physicists think that consciousness is an epiphenomenon. With all apologies, I don't think of it as a central problem.

SETH LLOYD: With due respect, consciousness is overrated. Ninety percent of the people I know are unconscious 90 percent of the time and the other 10 percent are unconscious 100 percent of the time.

There are different kinds of consciousness. There is the consciousness of running through the forest and not running into trees, which is a kind we’d like self-driving cars to have but they don’t, and then there’s a consciousness that I am a human being who is aware of myself as a human being—a kind of self-consciousness. This self-consciousness, which is the kind that humans often value and is the kind that they wonder whether machines can get, is what I’m talking about. Most human beings are unconscious by this definition.

JOHN BROCKMAN: Can an AI know what questions it should be asking? Or can it know what it doesn’t know?

WILCZEK: Definitely, yes. For any question you ask about whether an AI can do something that humans are known to do, the answer is yes, because it's overwhelmingly plausible that mind is based in matter. The human mind is based in matter, and matter is what physics says it is.

There are certainly things about matter that we don't know, but for all practical engineering purposes, we know the fundamental laws as well as they're ever going to be known, and our knowledge is more than adequate to explain all observations. They've been tested in far more extreme conditions than those found in human brains, which involve mild temperatures, mild densities, mild everything. Given that we know what matter is and that mind emerges from matter, we could in principle reproduce everything that goes on in a brain and nothing would be missing. I firmly believe that, in that sense, natural intelligence is a special case of artificial intelligence. So, an engineered entity could do anything that a human can do.

BROCKMAN: So, when David Shaw was talking about downloading brain capacity onto a disc, you're saying you just replicate the brain.

WILCZEK: Replicate its function. That’s very much a thought experiment. That’s not practical at all. But as a matter of principle, it’s hard to see how that could go wrong. People in physics do very delicate experiments and they have to correct for all kinds of possible sources of contamination.

NEIL GERSHENFELD: The mouse brain slicers are currently scanning brains down to the synaptic connection. We're just at the edge of having the first data sets that are good enough to do that from scratch. They have these crazy electron microscopes with 100 beams that take nanometer slices and read every single synapse, which they then reconstruct in 3D, so it's not that far off.

WILCZEK: That’s very far from having a functional brain or segment of a brain.

CAROLINE JONES: Getting back to your strong and beautiful statement about engagement with the world and this human model of vast synaptic proliferation and then synaptic cropping that brings you down to an adult consciousness: How does the physicist’s confidence in the material basis of consciousness jibe with this soft, meat machine creature that is in this environment? In other words, do you need to give Adam a skin made of sensory haptics? Do you need to have breath coming in and out of this machine to sense the world the way that you imagine the young human senses the world?

WILCZEK: No, I don't think it’s necessary. Well, it depends what you want to do. Of course, if you want to make a human companion that humans get along with.

JONES: I would argue from the experience of art, and life, and feminist arguments that the meat is a big part of the epiphenomenon. No one imagines AI as needing meat, so that’s part of my provocation. How much of the unconscious being in the world is part of the epiphenomenon that doesn’t interest you but is hypothetically possible?

DAVID CHALMERS: There's a big industry working on artificial meat.

WILCZEK: Let's talk about the embodiment of intelligence. If interacting with the world really is a vital part of achieving high levels of general intelligence efficiently, then some kind of receptive apparatus is important. I don't think it would have to look like a human body, but it wouldn't be a bad idea to have a skin that's telling you about the local environment.

Should you have two eyes as opposed to three, or four, or six? Does the skin need to be made out of flesh as opposed to some kind of plastic? These are very negotiable questions unless you want to have autonomous intelligences that interact intimately with humans. In that case, because humans are accustomed to interacting with other humans, it might be good to have the artificial guys look as human as possible.

Also, if you want to have artificial intelligence that appreciates the human experience and can make accurate models of what humans are thinking about and what they're experiencing, then again, you may want to have fairly accurate mimicry.

ALISON GOPNIK: I'm genuinely unsure about this, but it is striking that as long as we've had language, and certainly as long as we've had writing, it has been a real question how abstract a medium can be and still carry intimate, close, social interactions. Think about Elizabeth Barrett and Robert Browning, right? It's remarkable that with the technology of a quill and a piece of paper, you can have a completely different medium that doesn't look like a typical human medium of interaction at all. You seem to be able to get all the complexities, the interactions, and all the subtleties working just fine.

WILCZEK: They had a pretty good model of what they were dealing with.

BROOKS: Someone talked about the uncanny valley. I tried with my graduate students when I was at MIT, and I've tried with people in my company, to build a three-armed robot, because you can optimize much more that way. I've never been able to get anyone to build a three-armed robot. They feel it's too icky. They won't do it.

WILCZEK: Why don’t you have them do eight and call it an octopus?

BROOKS: I’m not saying they’re right, I’m just pointing out that there’s a barrier people have, which is not bounded rationality or anything like that. No matter what a robot looks like, how it looks is making a promise of what it’s going to deliver. And when that promise isn’t matched by what it does deliver, it’s really upsetting.

GOPNIK: Studies have just come out that kids do not experience an uncanny valley in the same way that adults do. You would have thought that that’s the natural state and then we have to overcome it, but it may be a result of a whole lot of experience with machines. Maybe this is a generational effect.

WILCZEK: You called it a "barrier," which may be appropriate. A barrier is something you can reach or get over, and once you’ve reached it maybe there’s a smooth path after that—acceptance. A large part of the unease is simply not knowing what to expect.

JONES: The classic formulation of the uncanny valley is that if it gets too close to the human, it's profoundly disturbing. It's experienced as a creepy freak, right? The eight arms would be the way to go, and the twenty-eight eyes would be the way to go.

LLOYD: Frank, is your main point that this is going to happen, but it’s going to happen slowly?

WILCZEK: Yes.

LLOYD: We’re going to have time to get used to this, so maybe we should practice being nice to these artificial intelligences before we let them appoint our president.

WILCZEK: Yes. Humility and learning by doing will be essential, not only for practical tasks but also for the coevolution of the different kinds of intelligences.

BROOKS: In the relatively short term, and by short term I mean the next ten to twenty years, as we get more robots in our homes, largely driven economically by the need for elder care, those robots are going to have very different umwelts than humans. They're going to have all sorts of senses that are cheap as anything but that we don't have. They're going to be able to detect any Bluetooth device, any Wi-Fi device, they're going to use Wi-Fi to be able to detect when someone is breathing, they're going to be able to see the hotspots where someone was just sitting on the couch.

You have to have some intelligence to take care of it. But they’re going to have a very different sensory perception of the world that they’re sharing with us. How we get used to them will be interesting. Will there be certain species of robots with particular sorts of sensory stuff that we understand, or will we be continuously surprised by them knowing stuff about us that we didn’t expect them to know?

W. DANIEL HILLIS: Don’t you think we’ll have very different sets of perception by then, too?

BROOKS: Well, yes. Maybe not in that ten to twenty-year timeframe.

WILCZEK: There are two very common ways of talking about AI that are not appropriate, and it's going to become increasingly clear that they're not appropriate. One is to talk about AI in terms of "us versus them." They're our creations, and we will be interacting in very intimate ways with them. They'll be part of society.

The other thing that you were alluding to is that it's common to talk of AI as if it were one thing. Intelligence, whether natural or artificial, can take many forms. Natural intelligence is embodied in all kinds of animals, and maybe even in our digestive systems and immune systems. Artificial intelligence comes in all kinds and at all levels; some people would argue that thermostats are a form of artificial intelligence. Then you have distributed intelligence; you could have soldiers, you could have servants, and those would be very different kinds of minds.
