Crypto News

Meta’s Yann LeCun: Scaling AI Won’t Make It Smarter

For many years, the AI industry has adhered to a set of principles known as "scaling laws." OpenAI researchers described them in the seminal 2020 paper, "Scaling Laws for Neural Language Models."

"Model performance depends most strongly on scale, which consists of three factors: the number of model parameters N (excluding embeddings), the size of the dataset D, and the amount of compute C used for training," the authors wrote.
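For reference, the relationships that paper reports take a power-law form: holding the other two factors fixed and non-bottlenecked, test loss falls predictably as each factor grows. A sketch of those relations (the constants and exponents are empirical fits reported in the paper, quoted here approximately):

```latex
% Power-law scaling of test loss L with parameters N, dataset size D,
% and compute C, each when the other factors are not the bottleneck:
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
% Kaplan et al. (2020) fit exponents on the order of
% \alpha_N \approx 0.076 and \alpha_D \approx 0.095.
```

The practical reading of these curves is the doctrine LeCun is pushing back on: because loss keeps falling smoothly as N, D, and C grow, the industry inferred that scale alone is a path to smarter models.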

In essence, more is better when it comes to developing highly intelligent AI. This idea has fueled huge investments in data centers that let AI models process and learn from vast amounts of existing information.

But recently, AI experts across Silicon Valley have begun to challenge that doctrine.

"Most interesting problems scale extremely badly," Meta's chief AI scientist Yann LeCun told an audience at the National University of Singapore on Sunday. "You just can't assume that more data and more compute means smarter AI."

LeCun's argument hinges on the idea that training AI on huge sets of basic data, such as internet text, won't lead to certain types of superintelligence. Smarter AI is a different beast.

"The mistake is that very simple systems, when they work for simple problems, people extrapolate them to think that they'll work for complex problems," he said. "They do some amazing things, but that creates a religion of scaling, that you just need to scale systems more and they'll naturally become more intelligent."

So far, the impact of scaling has been magnified because many of the most recent AI breakthroughs have been on problems that are actually "really simple," LeCun said. The largest language models today are trained on roughly the amount of information processed by the visual cortex of a four-year-old, he said.

"When you deal with real-world problems with ambiguity and uncertainty, it's not just about scaling anymore," he added.

AI advancements have recently slowed. That is due, in part, to a dwindling corpus of available public data.

LeCun is not the only prominent researcher to question the power of scaling. Scale AI CEO Alexandr Wang called scaling "the biggest question in the industry" at the Cerebral Valley conference last year. Cohere CEO Aidan Gomez has called it a "dumb" way to improve AI models.

LeCun advocates instead for a training approach based on world models.

"We need AI systems that can learn new tasks quickly. They need to understand the physical world — not just text and language but the real world — have some level of common sense, and abilities to reason and plan, have persistent memory — all the stuff that we expect from intelligent entities," he said in his talk on Sunday.

Last year, on an episode of the Lex Fridman podcast, LeCun said that in contrast to large language models, which can only predict their next steps based on patterns, world models have a higher level of cognition. "The extra component of a world model is something that can predict how the world is going to evolve as a consequence of an action you might take."
