Watch – 21 Lessons for the 21st Century | Yuval Noah Harari | Talks at Google

 

From the beginning.

https://youtu.be/Bw9P_ZXWDJU

 

From the quote below.

https://youtu.be/Bw9P_ZXWDJU?t=3186

 

YUVAL NOAH HARARI: “As I said in the very beginning, I don’t think we can predict the future, but I think we can influence it. What I try to do as a historian– and even when I talk about the future, I define myself as a historian, because I think that history is not the study of the past. History is the study of change, how human societies and political systems and economies change.  And what I try to do is to map different possibilities rather than make predictions.

 

Rather than predicting, "This is what will happen in 2050." And we need to keep a very broad perspective. One of the biggest dangers is when we have a very narrow perspective, like we develop a new technology and we think, oh, this technology will have this outcome. And we are convinced of this prediction, and we don't take into account that the same technology might have very different outcomes. And then we don't prepare.

 

And again, as I said in the beginning, it's especially important to take into account the worst possible outcomes in order to be aware of them. So I would say whenever you are thinking about the future, the future impact of a technology you're developing, create a map of different possibilities. If you see just one possibility, you're not looking wide enough. If you see two or three, it's probably also not wide enough. You need a map of, like, four or five different possibilities, minimum."

 

 

 

AUDIENCE QUESTION:

https://youtu.be/Bw9P_ZXWDJU?t=3289

 

Hey, Mr. Harari.

So my question is– I’ll start very broad, and then I’ll narrow it down for the focus. I’m really interested in, what do you think are the components that make these fictional stories so powerful in how they guide human nature?

And then if I narrow it down: I'm specifically interested in the self-destructive behavior of humans. How can these fictional stories, led by a few people, convince the masses to literally kill or die for that fictional story?

 

YUVAL NOAH HARARI:

 

It again goes back to hacking the brain and hacking the human animal. It’s been done throughout history, previously just by trial and error, without the deep knowledge of brain science and evolution we have today.

But to give an example: if you want to convince people to persecute and exterminate some other group of people, what you need to do is really latch onto the disgust mechanisms in the human brain. Evolution has shaped Homo sapiens with very powerful disgust mechanisms in the brain to protect us against diseases, against all kinds of sources of potential disease. And if you look at the history of bias and prejudice and genocide, one recurring theme is that it repeatedly latches onto these disgust mechanisms. And so you would find things like women are impure, or these other people, they smell bad and they bring diseases. And very, very often disgust is at the center.

So you’ll often find comparison between certain types of humans and rats or cockroaches, or all kinds of other disgusting things.

 

So if you want to instigate genocide, you start by hacking the disgust mechanisms in the human brain. And this is very, very deep. And if it's done from an early age, it's extremely difficult to undo afterwards. People know intellectually that it's wrong to say that these people are disgusting, that these people smell bad. But they know it only intellectually. When you place them in a brain scanner, they can't help it, if that's how they were raised. I mean, we can still do something about it. We can still kind of defeat this. But it's very difficult, because it really goes to the core of the brain.

 

WILSON WHITE:

 

So I'll end on a final question, because we're at time. When Larry and Sergey founded Google, they did so with this deep belief in technology's ability to improve people's lives everywhere. So if you had a magic wand and you could give Google the next big project for us to work on, in 30 seconds or less, what would you grant us as our assignment?

 

YUVAL NOAH HARARI:

 

An AI system that gets to know me in order to protect me and not in order to sell me products or make me click on advertisements and so forth.

 

WILSON WHITE:

 

All right. Mission accepted.

 

[LAUGH]

 

Thank you, guys.

 

[APPLAUSE]

 
