Yann LeCun’s Post

Yann LeCun

Can generative image models be good world models? This work from @Meta FAIR shows that there is a tradeoff between realism and diversity. The more realistic a generative model becomes, the less diverse it becomes. Realism comes at the cost of coverage. In other words, the most realistic systems are mode-collapsed. My hunch, supported by a growing amount of empirical evidence, is that world models should *not* be generative. They should make predictions in representation space. In representation space, unpredictable or otherwise irrelevant information is absent. This is the main argument in favor of JEPA (Joint Embedding Predictive Architectures). https://lnkd.in/em4aVMP3
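For anyone who wants a concrete picture of what "predictions in representation space" means, here is a minimal, hypothetical PyTorch sketch of the JEPA idea. The names (Encoder, Predictor, jepa_step), the layer sizes, and the frozen/EMA target encoder are illustrative assumptions, not Meta FAIR's actual implementation; the point is only that the loss is a regression on embeddings rather than a reconstruction of pixels.

```python
# Minimal JEPA-style training step (illustrative sketch, not Meta FAIR's code).
# The predictor operates on embeddings, so pixel-level detail that the encoder
# discards never has to be predicted.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim_in=784, dim_emb=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_emb))
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    def __init__(self, dim_emb=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_emb, 256), nn.ReLU(), nn.Linear(256, dim_emb))
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
target_encoder = Encoder()                     # in practice often an EMA copy of the encoder
target_encoder.load_state_dict(encoder.state_dict())
predictor = Predictor()
opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

def jepa_step(x_context, x_target):
    """One training step: predict the target view's embedding from the context view's embedding."""
    z_context = encoder(x_context)
    with torch.no_grad():                      # targets come from the frozen/EMA encoder
        z_target = target_encoder(x_target)
    z_pred = predictor(z_context)
    loss = nn.functional.mse_loss(z_pred, z_target)   # regression in representation space, not pixel space
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random stand-in data (two "views" of the same scene):
loss = jepa_step(torch.randn(32, 784), torch.randn(32, 784))
```

Because unpredictable or irrelevant detail can be dropped by the encoder, the predictor is never forced to commit to one rendering of it, which is the contrast with generative models the post is drawing.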

Steven Song

Coding Bootcamp Graduate, History Student at Gordon College

3w

So how could we make generative models more diverse, Mr. LeCun? It matters.

Jean Ibarz

PhD in Computer Science | Generative AI Enthusiast | Dev Scientist at Aura Aero

4w

What about this: realism implies a higher cognitive load, so for an equivalent model and training dataset size, higher realism means harder learning, hence lower performance, hence less diversity (because diversity comes at the cost of performance).

Phil Lunn

Future of AI Business Consultancy Pioneer | Business Growth Through Innovation | Fractional CxO Senior Business Leader | Artificial Intelligence Knowledge Boost | Product, Marketing and Sales Strategies |

4w

Trying to help those who follow the tech side of AI but are not gurus. Am I roughly right in this simplistic view (with a bit of AI help)? Happy to be corrected!

The idea is that instead of creating detailed pictures (like generative models do), it's better to work with a simpler, more focused way of understanding the world. Imagine trying to teach a robot about different kinds of trees. Instead of having the robot draw every possible tree, it would be more effective to teach it what makes a tree a tree (the important features) and have it use this understanding to recognize trees in the future.

Generative models: like a very detailed artist who might get stuck drawing the same perfect tree over and over, missing out on other types of trees.

Representation space: like teaching the robot the key features of trees so it can recognize any tree, even one that looks different. This skips the unnecessary details and focuses on what's important.

JEPA concentrates on these key features in a simplified form, making understanding and prediction more accurate and relevant. It avoids getting bogged down in unnecessary details and lets the robot (or model) make better predictions about the world around it.

Thijs van den Berg

Head of AI & Quantitative Research, Shell Asset Management Company

3w

Doesn't that move the collapse problem to the representation encoder?

Roumen Popov

DSP Software Engineer

4w

"The more realistic a generative model becomes, the less diverse it becomes." - that looks like the bias-variance trade off, which is btw optimally (in a mathematical sense) solved by Bayesian inference models

Uchenna Nkemjika Aguoji

Helping grow startups, adding value to established companies

4w

"world models should *not* be generative." 🤨🧐🤯

Rick Bullotta

Resist the AI Oligarchs ✊🏼

3w

Not so sure about that. In the real world, “unpredictable or otherwise irrelevant information” is pervasive.

I am confused; you keep advocating JEPA, which is nothing special, as anyone can test it on GitHub. What surprises me is why you don't apply the methodology from the above paper (since you are Meta's Chief Scientist) and include JEPA to prove your point. So far, all I get is that JEPA is the solution, yet there is no demonstration that compares its results with generative models to establish it as a candidate for a world model. As a chief scientist, I encourage you to test your hypothesis. Science is about evidence: not just being "in favour", but solid, reproducible models that compare JEPA with generative models. All the best.

Daniel Gross, PhD

Founder @ Modal <> AI | Automated Compliance & Oversight | Increase Quality & Reduce cost of error

4w

That's an important insight and perhaps a vote for ontological base models in realistic spaces
