Nvidia CEO tries to explain why DLSS 5 isn’t just “AI slop”

gruberduber

Wise, Aged Ars Veteran
167
It is amusing when we tell tech CEOs or whoever "We don't want that", and what they hear is "We will maybe want that once you make it better", but what they should be hearing is "The product/technology you are offering me is fundamentally designed to do a thing that I do not want to be done".
 
Upvote
171 (172 / -1)

Totally Radical Liberal

Ars Scholae Palatinae
1,338
Subscriptor
If you're explaining, you're losing
It's worse than that. The explanation is either confused or lying, because other sources have clarified that the inputs are the standard inputs available to DLSS--which means no geometry data except the depth buffer. That means that DLSS 5 truly is just a post-rendering filter. It has no more awareness of the internal lighting or geometry than decades-old SSAO algorithms have.
 
Upvote
72 (72 / 0)

Fatesrider

Ars Legatus Legionis
25,280
Subscriptor
It doesn't matter anyway. I know I'm not buying 2x 5090s any time soon. Or any other GPU with the Nvidia brand.
In a macroscopic view of computer gear, AMD is TYPICALLY less expensive.

And being one whose decisions are almost entirely based on getting the best value for my dollar, AMD is the credible choice - as long as they don't go full moron and jack their prices up, too (like everyone is doing).

Until the fad ends, I probably won't be building any new system or upgrading again. It's simple math. Too much money means no purchases at all. And I expect that's not unique to me.

As for DLSS 5, it actually hurts my eyes to look at. The issue I have with AI video is that the scene isn't static. I see the variations in the renderings frame to frame over time, where it kind of has a "bugs covering the walls and twitching around" vibe that's really hard to describe if you don't see it. To me, it's like uncanny valley levels of weird elevated to a physically painful level.

It just gives me headaches almost immediately.

So, no, not a look I enjoy no matter how desperate they are to find some way to monetize the monumental waste of funds and resources that went into building this abomination. I can't wait for the whole thing to crash and burn.

And if the economy keeps going the way it is with Trump's delusional gifts to Russia and China, the crash of the economy is VERY likely to take out AI, too, as investors flee to safer havens.
 
Upvote
39 (39 / 0)

Purpleivan

Ars Praetorian
440
Subscriptor++
I'm also morbidly curious as to how they plan to go from a 5090 dedicated solely to running this ...thing (in parallel with a second 5090 doing the initial rendering) to "runs in the spare memory and compute budget of a single card". A dedicated 5090 means that the model could be anywhere up to 32 GB in size/working space, and if they want to get it working on a sole 5080, that would mean maybe a 4x reduction in size (so, 8 GB for the model, 8 for rendering) plus a massive decrease in compute resources.
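The back-of-the-envelope arithmetic works out like this (a rough sketch; all the numbers are guesses from the post above, not real DLSS 5 specs):

```python
# Back-of-the-envelope VRAM budget for moving the model from a dedicated
# card to a shared one. Every figure here is a hypothetical illustration.

DEDICATED_CARD_VRAM_GB = 32  # a 5090 devoted entirely to the model

def single_card_budget(total_vram_gb: float, model_fraction: float = 0.5):
    """Split one card's VRAM between the model and rendering, and report
    how much the model must shrink versus a fully dedicated card."""
    model_gb = total_vram_gb * model_fraction
    render_gb = total_vram_gb - model_gb
    shrink_factor = DEDICATED_CARD_VRAM_GB / model_gb
    return model_gb, render_gb, shrink_factor

# A hypothetical 16 GB card (a 5080-class part):
model_gb, render_gb, shrink = single_card_budget(16)
print(f"model: {model_gb} GB, rendering: {render_gb} GB, "
      f"model must shrink ~{shrink:.0f}x")
# model: 8.0 GB, rendering: 8.0 GB, model must shrink ~4x
```

And that's just memory; the compute budget gets cut by at least as much, since the same card now has to render the game too.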

In other words, if you think the output is bad now...
I'm sure people using a single card wouldn't mind their games running at half speed to allow AI to produce visuals the developers didn't want, and display those to players instead of the ones they intended.

I mean, who wouldn't want that?
 
Upvote
61 (64 / -3)

AdeptFelix

Wise, Aged Ars Veteran
111
Setting aside the obviously-AI character filter, I don't even like the environment changes they showed. Basically they just crank the saturation and gamma curve, then remove fog or smoke. They might as well have shown something running in the standard RGB color space side by side with what they use for demos on TVs in Best Buy.
 
Upvote
56 (57 / -1)
I find it funny that so much focus is on the snapchat beauty filter demos, when the FIFA demo was much more revealing: in a very, very short demo, there were multiple frames where the ball completely disappeared into the crowd and the net, and others where rather clear text on the player's uniform was completely smudged into oblivion. Weird beauty-filter bullshit aside, DLSS 5 definitely sucks in the more fundamental ways that AI has always sucked.
 
Upvote
91 (91 / 0)

WXW

Ars Scholae Palatinae
1,161
It's worse than that. The explanation is either confused or lying, because other sources have clarified that the inputs are the standard inputs available to DLSS--which means no geometry data except the depth buffer. That means that DLSS 5 truly is just a post-rendering filter. It has no more awareness of the internal lighting or geometry than decades-old SSAO algorithms have.
I don't think it even uses the depth buffer, only the color.
 
Upvote
16 (16 / 0)

balthazarr

Ars Tribunus Angusticlavius
6,905
Subscriptor++
At the same time, Huang said DLSS 5 is decidedly separate from that kind of “slop,” because it “is 3D conditioned, 3D guided.” The artists behind a game are still the ones creating the in-game structural geometry and textures that form the “ground truth structure” that DLSS 5 works from, Huang said. “And so every single frame, it enhances but it doesn’t change anything,” he said.

What a load of BS.
 
Upvote
42 (43 / -1)

Marlor_AU

Ars Tribunus Angusticlavius
7,734
Subscriptor
Because DLSS 5 is “open,” Huang said artists can train the model for the specific kind of look they want.

So artists are going to be re-training their own custom DLSS models to achieve their desired look? "If you don't like it, you can just create your own". I guess that will sell some extra GPUs.

In the future, Huang said artists will also be able to prompt DLSS 5 with examples or a description of a desired look—“I want it to be a toon shader,” for instance. And if visual artists want to use DLSS 5’s models “to generate the opposite of photoreal, yeah, it’ll do that too,” he said.

"In some ill-defined future, artists will possibly be able to control this thing"?

Artistic control seems like a key feature to me, so leaving that as a potential future enhancement seems like a curious choice. It's almost as if this technology isn't ready for prime-time and they're jumping the gun with the announcement in order to sell hype to the market.
 
Upvote
45 (46 / -1)
It's essentially an uncanny-er valley filter. That's likely to require even more overpriced GPU power than what we're using now. And Nvidia is deluded enough to wonder why people aren't excited about it...

I'm sort of hoping they'll suffer some significant market devaluation from this massive misstep. Maybe they'll have to sell GPUs at something less than stratospheric pricing.

But it's probably false hope.
 
Upvote
24 (24 / 0)

CKHarwood

Smack-Fu Master, in training
37
This is such a silly kerfuffle. Things that want to stand out won't use it. Crap games will. Spend dollars accordingly and it will work out just fine.

This reliance on the invisible hand of the market hasn’t been serving us for the last several decades. That theory presumes absolute transparency in the marketplace, little to no intellectual property protection, and a flatter wealth-distribution curve than we have today. Absent those conditions, the metaphorical hand doesn’t function.
 
Upvote
71 (73 / -2)

DeeplyUnconcerned

Ars Scholae Palatinae
1,127
Subscriptor++
Just explain, in a reasonable amount of detail, how it actually works, and then we can decide on a fair basis if we’re individually/collectively OK with it. What specific controls do the artists have? What does “generative control at the geometry level” mean in practice? “No trust me it’s good actually” won’t change minds.
 
Upvote
21 (21 / 0)
It's essentially an uncanny-er valley filter. That's likely to require even more overpriced GPU power than what we're using now. And Nvidia is deluded enough to wonder why people aren't excited about it...
Nvidia were doing so well by selling hardware to the companies burning money on gen AI.

Why do they want to jump into that money pit themselves?
 
Upvote
13 (13 / 0)

Anton Longshot

Ars Praetorian
891
Subscriptor
I wonder if Jensen's secretary plans all his interviews in locations that give him an excuse to wear that fucking black leather jacket!
The faux biker jacket makes him look manly.
To people who never met actual bikers.
I've always suspected Jensen's one of those people.
 
Upvote
-2 (5 / -7)

shodanbo

Wise, Aged Ars Veteran
107
I'm also morbidly curious as to how they plan to go from a 5090 dedicated solely to running this ...thing (in parallel with a second 5090 doing the initial rendering) to "runs in the spare memory and compute budget of a single card". A dedicated 5090 means that the model could be anywhere up to 32 GB in size/working space, and if they want to get it working on a sole 5080, that would mean maybe a 4x reduction in size (so, 8 GB for the model, 8 for rendering) plus a massive decrease in compute resources.

In other words, if you think the output is bad now...
Just more selling the dream and shipping the nightmare.

In the land of AI they call this "Tuesday"
 
Upvote
25 (26 / -1)

WXW

Ars Scholae Palatinae
1,161
Just explain, in a reasonable amount of detail, how it actually works, and then we can decide on a fair basis if we’re individually/collectively OK with it.
He can't, because if he explains it in detail the lies will be much more noticeable, so he just twists the meaning of words to make people think this is something it isn't.

What specific controls do the artists have?
Intensity of the filter (unclear if for the full frame), color grading and masking.

What does “generative control at the geometry level” mean in practice?
Nothing, those are just words put together to sound like they mean something.
 
Upvote
41 (42 / -1)

AliSard

Wise, Aged Ars Veteran
136
Subscriptor
This reliance on the invisible hand of the market hasn’t been serving us for the last several decades. That theory presumes absolute transparency in the marketplace, little to no intellectual property protection, and a flatter wealth-distribution curve than we have today. Absent those conditions, the metaphorical hand doesn’t function.
Actually the invisible hand definitely functions. It punches you right in the meat-and-two-veg.
 
Upvote
23 (24 / -1)
As I've said before, on a technical level, yes, it's great the model is being fed 3D positional data of the elements, but at the end of the day, it's still taking that combined with a 2D output of the final frame and then adding inherently unpredictable changes to the image from its training set. That's why it doesn't look so great. Now, part of me is curious how this would look if designers could train their own models off their own concept art, bespoke for each game let's say, and the tech could swap in those training sets distributed per game. I base that on the in-house example they started this whole pitch with, and how it actually looked pretty close to the original rendering. Yes, PART of me wants to see that just to give the tech its best steel-man case.

But the reality is, what we got is a bunch of developers given an on/off switch for it, who showed the game first with all the visual detail turned as low as it could go, then with it "on" replacing all the artistry with the old early-2000s art style of cyber dragon ridden by scantily clad mecha elf stuff they were putting on the boxes all the graphics cards came in at the time. Ultimately, this is an entire extra GPU's worth of rendering to, at best, shift the art SIDEWAYS, such that it isn't at all clear they've improved anything for all their effort. And the decision to show the lowest graphical settings side by side backfired HEAVILY, since a lot of people actually prefer the game with everything turned down to what the AI model is doing.
 
Upvote
17 (18 / -1)