Computer scientist Yann LeCun: “Intelligence really is about learning”


S-T-R

Ars Scholae Palatinae
606
I think he thinks judgment is an emergent property. That may be so, but I also think it'll be a long slog to develop the core elements from which it emerges. This kind of stuff strikes me as baby steps along the way, not the endgame.
That is definitely what the V-JEPA model is built on. (LINK) It's not really judgment. All of these large models are predictive models trying to match an input to an output. Being able to pattern match gravity...okay cool, but aside from more slop video generation, I'm not sure there's any groundbreaking change from this.
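To make "predictive models trying to match an input to an output" concrete, here's a toy bigram predictor; purely illustrative (nothing like V-JEPA or any production model), it just looks up the most frequent continuation it saw in training:

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for illustration only.
corpus = "the ball falls down the ball rolls down the hill".split()

# Count which word follows which: the whole "model" is a table of
# observed input -> output pairs.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the continuation seen most often after `word` in training.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> 'ball' (the most frequent continuation of "the")
```

No understanding anywhere, just frequency lookup; scaled-up models are vastly more sophisticated, but the match-input-to-output framing is the same.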

The best thing about the AI bubble is that it is leading to a lot more investment in fundamental AI research.
Bad thing about the AI bubble is that when it pops you're heading for another AI winter.

The other bad thing, as pointed out in the interview itself, is that they're actually pulling back on fundamental research only months after splashing out billions to stand it up. All the money is going towards slight updates to existing, deployed tech in a desperate attempt to make it profitable. Tech that increasing numbers of people believe is a dead end.

Not to quibble, but I just really don't need the lunch menu.

IMO, nothing sets the "hubris before the fall" vibe quite like an overpriced, pretentious meal likely paid for with a company card.
 
Upvote
76 (86 / -10)

ColdWetDog

Ars Legatus Legionis
14,402
Not to quibble, but I just really don't need the lunch menu.
I dunno. First, it's an FT article - with all the baggage that it entails. Second, the hype and hubris of the meal corresponded well to the hype and hubris of the interview.

Although the pregnancy status of the reporter is definitely in the too-much-information-thank-you category.
 
Upvote
88 (99 / -11)

Snark218

Ars Legatus Legionis
36,775
Subscriptor
“I’m sure there’s a lot of people at Meta, including perhaps Alex, who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence,” he says.
It always amazes me how many otherwise intelligent people can say "You can't tell the truth! It's absolutely disastrous for my business model!" and no part of their brain realizes what a trap they set for themselves and walked into.

Gratifying to know that one of the foremost experts is willing to say this, though. I took a couple of semesters of cognitive science in college, and I'm not within leagues of expertise on that topic - but it seems so obvious to me that an LLM is not a path to anything definable as intelligence (let alone superintelligence) that it's always slightly bemusing that it's even a topic of conversation.
 
Upvote
97 (100 / -3)

Snark218

Ars Legatus Legionis
36,775
Subscriptor
I think he thinks judgment is an emergent property. That may be so, but I also think it'll be a long slog to develop the core elements from which it emerges. This kind of stuff strikes me as baby steps along the way, not the endgame.
It really cannot be overstated how far we are from those core elements or anything even vaguely resembling them or even understanding how much compute might be required to emulate them.
 
Upvote
16 (18 / -2)

Sadre

Ars Scholae Palatinae
1,013
Subscriptor
These tech people have such unimaginative ideas about who we are.

Learning what? And how? And from where?
“The greatest thing by far is to be a master of metaphor; it is the one thing that cannot be learnt from others; and it is also a sign of genius, since a good metaphor implies an intuitive perception of the similarity in the dissimilar.”

Aristotle, Poetics. These tech people sail their capital ships on thin ice, and are asking us to let it slide.

What the function of metaphor is isn't even clear. Is metaphor really an explanatory tool, or is it just an idiosyncrasy of human intelligence? Metaphors are rather labor-intensive, if explanation is your sole interest.

It's like trying to eat peas with a knife, I say.
 
Upvote
2 (15 / -13)

peterford

Ars Praefectus
4,272
Subscriptor++
This is an interview that gives a strong whiff of someone who barely has any time for some of their former colleagues and intentions. Fun to read.

From my limited knowledge I think LeCun is completely correct that LLMs won't achieve true intelligence. But that doesn't mean they can't be useful (with care) and impactful - in both good and bad senses.

More fundamental research for true AGI* is needed. I think AGI would be a good (world changing if affordable) thing in general. So if LLM hype gives some cover for the fundamental research, maybe there's a small sliver of light.

* noting of course that LeCun doesn't like the term "General" Intelligence. Are humans Generally intelligent if we can't do some things that some animals find trivial?
 
Upvote
19 (24 / -5)

Snark218

Ars Legatus Legionis
36,775
Subscriptor
* noting of course that LeCun doesn't like the term "General" Intelligence. Are humans Generally intelligent if we can't do some things that some animals find trivial?
This is one of many points on which the tech industry has simply forged forward as if terms and definitions were simple and obvious, never having checked in with cognitive science at all.
 
Upvote
51 (51 / 0)
Is there more to his 'new' model than it sounds?

It's true that LLMs are largely language based because that's what you threw into the probabilistic grinder; but there are also ones that we've thrown images and videos into the probabilistic grinder for and they can produce images and video with the same sort of clearly-blinkered-but-sometimes-good-enough vibe that the ones that do text do for text.

Definitely nothing that suggests that we just need more videos of billiard balls in motion to reach an expert system that understands (much less synthesizes) Newtonian mechanics; or even that it's for want of video that such a thing doesn't happen (it's probably not how a human would do it, since most of them can see; but what we call 'physics' is an exercise you can do in a mixture of text data and mathematical notation if you prefer to do it without diagrams; so the fact that LLMs chewing on centuries of astronomical predictions produce little of interest does not seem to be a defect of not being visual learners).

I'm also delighted to see that FT has their eyes on what matters; like interview food. The big questions.
 
Upvote
46 (46 / 0)

TheShark

Ars Praefectus
3,114
Subscriptor
It always amazes me how many otherwise intelligent people can say "You can't tell the truth! It's absolutely disastrous for my business model!" and no part of their brain realizes what a trap they set for themselves and walked into.

I wish that were true, but all the evidence in the world today says that being a serial liar works out great. From what I can tell, getting in trouble for lying just means you didn't lie enough.
 
Upvote
66 (66 / 0)

richardbartonbrown

Wise, Aged Ars Veteran
114
Subscriptor++
TLDR: LeCun says the emperor has no clothes and offers to try to figure out how to make a royal robe for him. Emperor says, "But I'm not naked!" Not acceptable to emperor. LeCun now an independent tailor, still trying to learn the trade.
I came to the comments looking for a tl;dr version. Yours is a wee bit biased but I suppose it will have to do.
 
Upvote
1 (5 / -4)

meliant

Smack-Fu Master, in training
58
I might be very wrong, but it is at least refreshing to see someone willing to invest in something other than LLMs. No wonder this world-based, not language-based, approach seems to show results in physical environments, e.g. jet engines.

As someone with a background in biological research, I'm surprised not to see any of these AI gurus propose creating intelligence the way nature "created" it: through natural selection of variations enabled/caused by mutations and genetic shuffling via sex. They focus on the idea of singularity coming from some model improving itself in new copies, etc. -- has this ever happened to a type of creature in nature?

It seems obvious to me that statistical models trained on pre-existing information and without any source of variability or natural selection will never "generate" anything new, only recombine what already exists. They only create the illusion of intelligence because they can access much more information than we can.
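The mutation-plus-selection loop described above can be sketched as a toy genetic algorithm; everything here (the target string, rates, population size) is made up for illustration, but it shows novelty arising from nothing but random variation filtered by an environment:

```python
import random

TARGET = "intelligence"  # the "environment": fitness rewards resemblance to it
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    # Count positions matching the target -- the selection pressure.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    # Random point mutations: the source of variation.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def crossover(a, b):
    # "Genetic shuffling": splice two parents at a random cut point.
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=200, generations=200):
    # Start from pure noise; no pre-existing information at all.
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        parents = pop[: pop_size // 4]  # survivors breed the next generation
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

print(evolve())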
 
Upvote
27 (34 / -7)

graylshaped

Ars Legatus Legionis
68,040
Subscriptor++
I’m a scientist, a visionary…
Sounds more like a dichotomy with tension than an easy fit. Whether it's a productive one depends on which trait prevails in a lifeboat.

I do like re-labeling this as Artificial Machine Intelligence--again, with all the caveats regarding the lack of agreement on what constitutes "intelligence."

Wow, this reporter sure wants us to be impressed with her.
 
Upvote
9 (14 / -5)

graylshaped

Ars Legatus Legionis
68,040
Subscriptor++
Unironically said of how we also do AI.

IMHO, if we need more intelligence, we need to not make it artificial. We are doing "intelligence" very stupidly, and it's only getting worse.
May we please redirect 25% of the resources going toward teaching machines how to problem-solve into improving the education of humans?
 
Upvote
27 (28 / -1)
That is definitely what the V-JEPA model is built on. (LINK) It's not really judgment. All of these large models are predictive models trying to match an input to an output. Being able to pattern match gravity...okay cool, but aside from more slop video generation, I'm not sure there's any groundbreaking change from this.


Bad thing about the AI bubble is that when it pops you're heading for another AI winter.

The other bad thing, as pointed out in the interview itself, is that they're actually pulling back on fundamental research only months after splashing out billions to stand it up. All the money is going towards slight updates to existing, deployed tech in a desperate attempt to make it profitable. Tech that increasing numbers of people believe is a dead end.



IMO, nothing sets the "hubris before the fall" vibe quite like an overpriced, pretentious meal likely paid for with a company card.
Yeah, I was commenting on LeCun's point of view - where he thinks he can make inroads on the problem. I agree they are probably trying to use screwdrivers to turn bolts. I agree they are probably not going to succeed with an approach like that.

Where I'm not 100% sure I agree is with the claim that there's no fairly straightforward way to use mass processing to produce judgment. A few years ago, text processing of the type we have now was a fantasy. What was needed was insight at the fundamental level of how to apply vector processing to huge datasets to spot patterns.

I'm not at all sure someone like LeCun is a good person to have involved, because he's old and that tends to mess up creative thinking. Whether he wants to or not, he is likely to push work in directions that match his prior successes out of ordinary human bias. Probably new blood will be better. This is a thing capitalism excels at, though, so there are some forces that are aligned to push the work forward naturally.

What's needed for judgment is insight at the fundamental level of how to apply vector processing to simulate or do analogous computing that produces similar results to human judgment. We will need new tech -- new software or new hardware, TBD. But I think there's a decent chance our overall approach will crack the problem. Not LLMs certainly. But large scale vector processing? Quite possibly.
 
Upvote
-18 (4 / -22)
Whatever the models and data used - language, visual, etc. - all of them will need to KNOW us (both collectively and individually) to be truly useful for us. But how can we trust any model, any company in our capitalist system? Let some company, any company, know 'everything' about me so it can manipulate me for its own financial benefit? NO!

And then, there are huge trust gaps too for any company operating in fascist or socialist states!

To me, the two giant hurdles to AI are technology and data. The latter will be difficult without trust. And who can trust these companies/governments/nation states?
 
Upvote
-6 (5 / -11)

mcswell

Ars Scholae Palatinae
990
Bad thing about the AI bubble is that when it pops you're heading for another AI winter.
The good thing about another AI winter is that it might give us breathing room to decide how to control this genie that we're trying to pull out of the bottle. It's clear enough (IMO) that the ability of LLMs to mimic humans (on the web, nobody knows that you're a dog--or an LLM) can lead to dangerous results, like the ability to put out tons of bad but convincing propaganda. Add to that the ability to actually do 1984-like monitoring using speech (and video) recognition, the temptation to trust "your" AI in realms ranging from stock exchanges to battlefields, the ability to hack into people's computers via the built-in AI (Microsoft Copilot, anyone?), and we've got a real mess.

That's bad enough, but the next generation of AI will come with even more dangerous capabilities. The longer we can put that off (at least until the next US president!), the more chance we have to do something about it.
 
Upvote
20 (21 / -1)

TenacityOverAptitude

Ars Centurion
207
Subscriptor++
I dunno. First, it's an FT article - with all the baggage that it entails. Second, the hype and hubris of the meal corresponded well to the hype and hubris of the interview.

Although the pregnancy status of the reporter is definitely in the too-much-information-thank-you category.

Many culture reporters interview in this conversational style. It gives you a place at the table with them, having a chat. I find it more entertaining to see the details of the people presented as a story, rather than just the facts.
 
Upvote
29 (34 / -5)
I’m not so sure. Significant capital investment and infrastructure is being built right now.

Even if we have economic shocks from a bubble popping, AI tech right now is massively under deployed.

Instead of an AI winter, I think we’d at least coast. Compute would be reallocated to more useful things (ordinary automation & process efficiency, medical tech, actually useful products, etc).

This is similar to what happened after the dotcom bubble and after the 2008 GFC.

Aside from the truly dire financials, which make me wonder how we can describe AI tech as 'massively under deployed', there's the problem (for research) that 'AI winter' doesn't necessarily mean that nothing is being done in vaguely related areas (e.g. the 1970s and 1980s were massive for computing generally, but DARPA would probably punch you for trying to submit an 'AI'-related grant without at least weasel-wording it as 'digital signal processing' or something).

It's certainly possible, especially if we get an Iridium-like scenario where a bunch of first-moving suckers take a multibillion-dollar bath and H200s (at least until they start to die) are available for basically the cost of electricity, that anyone who wants to do 'AI' research that neatly fits the type of compute being built out will be OK; but it's also likely that there will be comparatively little money for novel R&D compared to the demand for people who will grind marginal improvements out of LLMs to try to make them profitable, which won't help the R&D people much if they think those are a blind alley.

It's sort of like a situation where someone discovers that 'intelligence' is implemented by neurons, and after some impressive early experiments advocates spending $1.5 trillion or so on new chicken hatcheries. You'll definitely produce lots of neurons and modestly-sophisticated entities that way; but you aren't necessarily going to scale up to Einstein no matter how many chickens you have. And, compared to LLMs, chickens are practically family to the brains we are trying to emulate or surpass.
 
Upvote
16 (17 / -1)

LMKerrow

Smack-Fu Master, in training
12
The human factors group (mainly human factors engineers, psychologists, design researchers) I once supported did extensive research into how sensing and processing spoken language affected human performance in various use cases (such as operating a vehicle). A clear conclusion was that this activity hogged available resources. This is one reason why it's a bad idea to have any form of conversation while driving: it significantly degrades one's situational awareness. One test conducted involved measuring the width of a subject's peripheral field of vision while having a telephone conversation. The results showed a dramatic narrowing of the field of view (like tunnel vision) and a lessened ability to detect and respond to peripheral threats.

Would this be true if the human brain were constructed to process language?

I mention this because even the experts I worked with didn't fully understand how the brain works, let alone claim that a comparable artificial intelligence could be engineered.
 
Upvote
12 (12 / 0)
And at the same time Geoffrey Hinton says in interviews that LLMs might already be sentient because they "refuse" to be switched off and that world models are not the way to go. So go figure...
I suspect they would both admit that it's just a guess.

Probably the one solid belief in the field is that nobody knows what the fuck is going on really.

You just have to build the thing and see what it does. (or if you're Hinton, you don't.)
 
Upvote
-2 (3 / -5)