The AI pioneer talks about stepping down from Meta and the limits of large language models.
“We suffer from stupidity.”

Unironically said of how we also do AI.
“I think he thinks judgment is an emergent property. That may be so, but I also think it'll be a long slog to develop the core elements from which it emerges. This kind of stuff strikes me as baby steps along the way, not the endgame.”

That is definitely what the V-JEPA model is built on. (LINK) It's not really judgment. All of these large models are predictive models trying to match an input to an output. Being able to pattern-match gravity... okay, cool, but aside from more slop video generation, I'm not sure there's any groundbreaking change from this.
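For what it's worth, here's roughly what "predictive model in representation space" means in the JEPA family. This is a minimal toy sketch, assuming PyTorch; the layer sizes, module names, and EMA details are illustrative stand-ins, not V-JEPA's actual architecture:

```python
# Toy JEPA-style objective: predict the *embedding* of the future,
# not its pixels. Sizes and names are illustrative only.
import torch
import torch.nn as nn

class TinyJEPA(nn.Module):
    def __init__(self, dim=64, emb=32):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, emb))
        self.target_encoder = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, emb))
        self.predictor = nn.Sequential(
            nn.Linear(emb, 64), nn.ReLU(), nn.Linear(64, emb))
        # Target encoder starts as a copy of the context encoder and is
        # updated by an exponential moving average, not by gradients.
        self.target_encoder.load_state_dict(self.context_encoder.state_dict())
        for p in self.target_encoder.parameters():
            p.requires_grad = False

    def forward(self, context_frames, future_frames):
        z_context = self.context_encoder(context_frames)
        with torch.no_grad():
            z_target = self.target_encoder(future_frames)
        z_pred = self.predictor(z_context)
        # The loss lives in representation space: match the predicted
        # latent of the future to the encoded latent of the future.
        return nn.functional.mse_loss(z_pred, z_target)

@torch.no_grad()
def ema_update(model, tau=0.99):
    # Slowly drag the target encoder toward the context encoder.
    for pt, pc in zip(model.target_encoder.parameters(),
                      model.context_encoder.parameters()):
        pt.mul_(tau).add_(pc, alpha=1 - tau)

# One toy training step on random stand-ins for video "frames".
model = TinyJEPA()
opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
loss = model(torch.randn(8, 64), torch.randn(8, 64))
opt.zero_grad()
loss.backward()
opt.step()
ema_update(model)
print(f"latent prediction loss: {loss.item():.4f}")
```

The point of the sketch is the objective, nothing else: the predictor is scored on latent embeddings of what comes next, which is how "pattern-matching gravity" can fall out of it without any pixel-level generation or anything resembling judgment.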
“The best thing about the AI bubble is that it is leading to a lot more investment in fundamental AI research.”

Bad thing about the AI bubble is that when it pops you're heading for another AI winter.
Not to quibble, but I just really don't need the lunch menu.
“Not to quibble, but I just really don't need the lunch menu.”

I dunno. First, it's an FT article - with all the baggage that it entails. Second, the hype and hubris of the meal corresponded well to the hype and hubris of the interview.
“I’m sure there’s a lot of people at Meta, including perhaps Alex, who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence,” he says.

It always amazes me how many otherwise intelligent people can say "You can't tell the truth! It's absolutely disastrous for my business model!" and no part of their brain realizes what a trap they set for themselves and walked into.
“I think he thinks judgment is an emergent property. That may be so, but I also think it'll be a long slog to develop the core elements from which it emerges. This kind of stuff strikes me as baby steps along the way, not the endgame.”

It really cannot be overstated how far we are from those core elements, from anything even vaguely resembling them, or even from understanding how much compute might be required to emulate them.
“The greatest thing by far is to be a master of metaphor; it is the one thing that cannot be learnt from others; and it is also a sign of genius, since a good metaphor implies an intuitive perception of the similarity in the dissimilar.”
“* noting of course that LeCun doesn't like the term "General" Intelligence. Are humans Generally intelligent if we can't do some things that some animals find trivial?”

This is one of many points on which the tech industry has simply forged forward as if terms and definitions were simple and obvious, never having checked in with cognitive science at all.
“TLDR: LeCun says emperor has no clothes and offers to try to figure out how to make a royal robe for him. Emperor says but I not naked! Not acceptable to emperor. LeCun now independent tailor still trying to learn the trade.”

I came to the comments looking for a tl;dr version. Yours is a wee bit biased but I suppose it will have to do.
Damn, I thought intelligence was something the marketing department would figure out.
“I want to teach AI to feel pain.”
- Yann LeCun, paraphrased

Do you want to get "I Have No Mouth, and I Must Scream"? Because this is how you get "I Have No Mouth, and I Must Scream".
“I’m a scientist, a visionary…”

Sounds more like a dichotomy with tension than an easy fit. Whether it's a productive one depends on which trait prevails in a lifeboat.
“Unironically said of how we also do AI.”

May we please require that 25% of the resources going into teaching machines to problem-solve better be devoted to improving the education of humans?
IMHO, if we need more intelligence, we need to not make it artificial. We are doing "intelligence" very stupidly, and it's only getting worse.
As with "AI" advocacy itself, superficialities are all there are.Not to quibble, but I just really don't need the lunch menu.
“IMO, nothing sets the "hubris before the fall" vibe quite like an overpriced, pretentious meal likely paid for with a company card.”

We can hope he stuck Zuck with the tab.
“That is definitely what the V-JEPA model is built on. (LINK) It's not really judgment. All of these large models are predictive models trying to match an input to an output. Being able to pattern-match gravity... okay, cool, but aside from more slop video generation, I'm not sure there's any groundbreaking change from this.”

Yeah, I was commenting on LeCun's point of view - where he thinks he can make inroads on the problem. I agree they are probably trying to use screwdrivers to turn bolts. I agree they are probably not going to succeed with an approach like that.
“Bad thing about the AI bubble is that when it pops you're heading for another AI winter.”
The other bad thing, as pointed out in the interview itself, is that they're actually pulling back on fundamental research only months after splashing out billions to stand it up. All the money is going towards slight updates to existing, deployed tech in a desperate attempt to make it profitable. Tech that increasing numbers of people believe is a dead end.
IMO, nothing sets the "hubris before the fall" vibe quite like an overpriced, pretentious meal likely paid for with a company card.
“And ditto for any company operating in a fascist or socialist state!”

One of these things is tremendously unlike the other.
“Bad thing about the AI bubble is that when it pops you're heading for another AI winter.”

The good thing about another AI winter is that it might give us breathing room to decide how to control this genie that we're trying to pull out of the bottle. It's clear enough (IMO) that the ability of LLMs to mimic humans (on the web, nobody knows that you're a dog--or an LLM) can lead to dangerous results, like the ability to put out tons of bad but convincing propaganda. Add to that the ability to actually do 1984-like monitoring using speech (and video) recognition, the temptation to trust "your" AI in realms ranging from stock exchanges to battlefields, the ability to hack into people's computers via the built-in AI (Microsoft Copilot, anyone?), and we've got a real mess.
“Not to quibble, but I just really don't need the lunch menu.”

Au contraire... I was waiting to hear who was picking up the tab for lunch.
“I dunno. First, it's an FT article - with all the baggage that it entails. Second, the hype and hubris of the meal corresponded well to the hype and hubris of the interview.”

Although the pregnancy status of the reporter is definitely in the too-much-information-thank-you category.
I’m not so sure. Significant capital investment and infrastructure are being built right now.
Even if we have economic shocks from a bubble popping, AI tech right now is massively under-deployed.
Instead of an AI winter, I think we’d at least coast. Compute would be reallocated to more useful things (ordinary automation & process efficiency, medical tech, actually useful products, etc).
This is similar to what happened after the dotcom bubble and after the 2008 GFC.
“And at the same time Geoffrey Hinton says in interviews that LLMs might already be sentient because they "refuse" to be switched off and that world models are not the way to go. So go figure...”

I suspect they would both admit that it's just a guess.