Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk

Meta’s chief AI scientist, Yann LeCun, received another accolade to add to his long list of awards on Sunday, when he was recognized with a TIME100 Impact Award for his contributions to the world of artificial intelligence.

Ahead of the award ceremony in Dubai, LeCun sat down with TIME to discuss the barriers to achieving “artificial general intelligence” (AGI), the merits of Meta’s open-source approach, and what he sees as the “preposterous” claim that AI could pose an existential risk to the human race.

TIME spoke with LeCun on Jan. 26. This conversation has been condensed and edited for clarity.

Many people in the tech world today believe that training large language models (LLMs) on more computing power and more data will lead to artificial general intelligence. Do you agree?

It’s astonishing how well [LLMs] work if you train them at scale, but they’re very limited. We see today that those systems hallucinate, they don’t really understand the real world. They require enormous amounts of data to reach a level of intelligence that is not that great in the end. And they can’t really reason. They can’t plan anything other than things they’ve been trained on. So they’re not a road towards what people call “AGI.” I hate the term. They’re useful, there’s no question. But they are not a path towards human-level intelligence.

You mentioned you hate the acronym “AGI.” It’s a term that Mark Zuckerberg used in January, when he announced that Meta is pivoting towards building artificial general intelligence as one of its central goals as an organization.

There’s a lot of misunderstanding there. So the mission of FAIR [Meta’s Fundamental AI Research team] is human-level intelligence. This ship has sailed, it’s a battle I’ve lost, but I don’t like to call it AGI because human intelligence is not general at all. There are characteristics that intelligent beings have that no AI systems have today, like understanding the physical world; planning a sequence of actions to reach a goal; reasoning in ways that can take you a long time. Humans and animals have a special piece of the brain that we use as working memory. LLMs don’t have that.

A baby learns how the world works in the first few months of life. We don’t know how to do this [with AI]. Once we have techniques to learn “world models” by just watching the world go by, and combine this with planning techniques, and perhaps combine this with short-term memory systems, then we might have a path towards, not general intelligence, but let’s say cat-level intelligence. Before we get to human level, we’re going to have to go through simpler forms of intelligence. And we’re still very far from that.

In some ways that metaphor makes sense, because a cat can look out into the world and learn things that a state-of-the-art LLM simply can’t. But then, the entire summarized history of human knowledge isn’t available to a cat. To what extent is that metaphor limited?

So here’s a very simple calculation. A large language model is trained on the entire text available on the public internet, more or less. Typically, that’s 10 trillion tokens. Each token is about two bytes. So that’s two times 10 to the [power of] 13 bytes of training data. And you say, Oh my God, that’s incredible, it will take a human 170,000 years to read through this. It’s just an insane amount of data. But then you talk to developmental psychologists, and what they tell you is that a 4-year-old has been awake for 16,000 hours in its life. And then you can try to quantify how much information got into its visual cortex in the space of four years. And the optic nerve is about 20 megabytes per second. So 20 megabytes per second, times 16,000 hours, times 3,600 seconds per hour. And that’s 10 to the [power of] 15 bytes, which is 50 times more than 170,000 years’ worth of text.
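
A quick back-of-envelope sketch of that arithmetic, using the figures LeCun cites (10 trillion tokens at roughly two bytes each, 16,000 waking hours, about 20 MB per second through the optic nerve); the script and its variable names are illustrative only.

```python
# Back-of-envelope comparison of LLM text data vs. a child's visual input,
# using the round numbers cited in the interview.

# Text side: an LLM's training corpus
tokens = 10e12                      # ~10 trillion tokens of public text
bytes_per_token = 2                 # roughly two bytes per token
text_bytes = tokens * bytes_per_token            # ~2e13 bytes

# Visual side: a 4-year-old child
waking_hours = 16_000               # hours awake by age four
optic_nerve_bytes_per_s = 20e6      # ~20 MB/s through the optic nerve
visual_bytes = optic_nerve_bytes_per_s * waking_hours * 3600   # ~1.2e15 bytes

print(f"text corpus:  {text_bytes:.1e} bytes")       # 2.0e+13
print(f"visual input: {visual_bytes:.1e} bytes")     # 1.2e+15
print(f"ratio:        {visual_bytes / text_bytes:.0f}x")  # ~58, i.e. "about 50 times"
```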

Right, but the text encodes the entire history of human knowledge, whereas the visual information that a 4-year-old is getting only encodes basic 3D information about the world, basic language, and stuff like that.

But what you say is wrong. The vast majority of human knowledge is not expressed in text. It’s in the subconscious part of your mind, that you learned in the first year of life before you could speak. Most knowledge really has to do with our experience of the world and how it works. That’s what we call common sense. LLMs do not have that, because they don’t have access to it. And so they can make really stupid mistakes. That’s where hallucinations come from. Things that we completely take for granted turn out to be extremely complicated for computers to reproduce. So AGI, or human-level AI, is not just around the corner, it’s going to require some pretty deep perceptual changes.

Let’s talk about open source. You have been a big advocate of open research in your career, and Meta has adopted a policy of effectively open-sourcing its most powerful large language models, most recently Llama 2. This strategy sets Meta apart from Google and Microsoft, which do not release the so-called weights of their most powerful systems. Do you think that Meta’s approach will continue to be appropriate as its AIs become more and more powerful, even approaching human-level intelligence?

The first-order answer is yes. And the reason for it is, in the future, everyone’s interaction with the digital world, and the world of knowledge more generally, is going to be mediated by AI systems. They’re going to be basically playing the role of human assistants who will be with us at all times. We’re not going to be using search engines. We’re just going to be asking questions of our assistants, and they’re going to help us in our daily lives. So our entire information diet is going to be mediated by these systems. They will constitute the repository of all human knowledge. And you cannot have this kind of dependency on a proprietary, closed system, particularly given the diversity of languages, cultures, values, and centers of interest across the world. It’s as if you said, can you have a commercial entity, somewhere on the West Coast of the U.S., produce Wikipedia? No. Wikipedia is crowdsourced because it works. So it’s going to be the same for AI systems: they’re going to have to be trained, or at least fine-tuned, with the help of everyone around the world. And people will only do this if they can contribute to a widely available open platform. They’re not going to do this for a proprietary system. So the future has to be open source, if nothing else for reasons of cultural diversity and democracy. We need diverse AI assistants for the same reason we need a diverse press.

One criticism you hear a lot is that open sourcing can allow very powerful tools to fall into the hands of people who would misuse them. And that if there is a degree of asymmetry in the power of attack versus the power of defense, then that could be very dangerous for society at large. What makes you sure that’s not going to happen?

There’s a lot of things that are said about this that are basically complete fantasy. There’s actually a report that was just published by the RAND Corporation where they studied, with current systems, how much easier does it make [it] for badly-intentioned people to come up with recipes for bioweapons? And the answer is: it doesn’t. The reason is because current systems are really not that smart. They’re trained on public data. So basically, they can’t invent new things. They’re going to regurgitate approximately whatever they were trained on from public data, which means you can get it from Google. People have been saying, “Oh my God, we need to regulate LLMs because they’re gonna be so dangerous.” That is just not true. 

Now, future systems are a different story. Maybe once we get powerful systems that are super-smart, they’re going to help science, they’re going to help medicine, they’re going to help business, they’re going to erase cultural barriers by allowing simultaneous translation. So there are a lot of benefits. So there’s a risk-benefit analysis, which is: is it productive to try to keep the technology under wraps, in the hope that the bad guys won’t get their hands on it? Or is the strategy to, on the contrary, open it up as widely as possible, so that progress is as fast as possible, so that the bad guys always trail behind? I’m very much in the second category of thinking. What needs to be done is for society in general, the good guys, to stay ahead by progressing. And then it’s my good AI against your bad AI.

You’ve called the idea of AI posing an existential risk to humanity “preposterous.” Why?

There are a number of fallacies there. The first fallacy is that because a system is intelligent, it wants to take control. That’s just completely false. It’s even false within the human species. The smartest among us do not want to dominate the others. We have examples on the international political scene these days: it’s not the smartest among us who are the chiefs.

Sure. But it’s the people with the urge to dominate who do end up in power.

I’m sure you know a lot of incredibly smart humans who are really good at solving problems. They have no desire to be anyone’s boss. I’m one of those. The desire to dominate is not correlated with intelligence at all.

But it is correlated with domination.

Okay, but the drive that some humans have for domination, or at least influence, has been hardwired into us by evolution, because we are a social species with a hierarchical organization. Look at orangutans. They are not social animals. They do not have this drive to dominate, because it’s completely useless to them.

That’s why humans are the dominant species, not orangutans. 

The point is, AI systems, as smart as they might be, will be subservient to us. We set their goals, and they won’t have any intrinsic drive to dominate unless we build one into them. It would be really stupid to build that. It would also be useless. Nobody would buy it anyway.

What if a human, who has the urge to dominate, programs that goal into the AI?

Then, again, it’s my good AI against your bad AI. If you have badly-behaved AI, either by bad design or deliberately, you’ll have smarter, good AIs taking them down. The same way we have police or armies.

But police and armies have a monopoly on the use of force, which in a world of open source AI, you wouldn’t have.

What do you mean? In the U.S., you can buy a gun anywhere. Even though the police have a legal monopoly on the use of force in most of the U.S., a lot of people have access to insanely powerful weapons.

And that’s going well?

I find that’s a much bigger danger to the lives of residents of the North American landmass than AI. But no, I mean, we can imagine all kinds of catastrophe scenarios. There are millions of ways to build AI that would be bad, dangerous, useless. But the question is not whether there are ways it could go bad. The question is whether there is a way that it will go right.

It’s going to be a long, arduous process of designing systems that are more and more powerful with safety guardrails so that they are reliable, and safe, and useful. It’s not going to happen in one day. It’s not like one day, we’re going to build some gigantic computer and turn it on, and then the next minute it is going to take over the world. That’s the preposterous scenario.

One final question. What should we expect from Llama 3?

Well, better performance, most likely. Video multimodality, and things like that. But it’s still being trained. 
