It is amusing when we tell tech CEOs or whoever "We don't want that", and what they hear is "We will maybe want that once you make it better", but what they should be hearing is "The product/technology you are offering me is fundamentally designed to do a thing that I do not want to be done".
> If you're explaining, you're losing

It's worse than that. The explanation is either confused or lying, because other sources have clarified that the inputs are the standard inputs available to DLSS--which means no geometry data, except the depth buffer. That means that DLSS 5 truly is just a post-rendering filter. It has no more awareness of the internal lighting or geometry than decades-old SSAO algorithms have.
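To make the "post-rendering filter" point concrete: a screen-space pass sees only per-pixel buffers (color, depth), never the scene's actual geometry or lights. A minimal illustrative sketch of an SSAO-style darkening pass built purely from a depth buffer (this is a toy example, not Nvidia's algorithm):

```python
# Illustrative only: a crude screen-space ambient-occlusion-style pass.
# Like any post filter, it sees per-pixel buffers, not scene geometry.
import numpy as np

def depth_only_ao(color: np.ndarray, depth: np.ndarray, radius: int = 2) -> np.ndarray:
    """Darken pixels that sit behind their local neighborhood's average depth."""
    h, w = depth.shape
    padded = np.pad(depth, radius, mode="edge")
    # Local mean depth via a box filter over the (2r+1)^2 neighborhood.
    neigh = np.zeros_like(depth, dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            neigh += padded[radius + dy : radius + dy + h,
                            radius + dx : radius + dx + w]
    neigh /= (2 * radius + 1) ** 2
    # Pixels deeper than their surroundings get a crude occlusion factor.
    occlusion = np.clip((depth - neigh) * 4.0, 0.0, 0.5)
    return color * (1.0 - occlusion)[..., None]
```

Everything the pass "knows" is inferred from those two buffers; whether a dark pixel is a crevice or a dark texture is invisible to it, which is exactly the limitation the comment is pointing at.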
> it doesn't matter anyway. I know I'm not buying 2x 5090's any time soon. Or any other GPU with the nvidia brand.

In a macroscopic view of computer gear, AMD is TYPICALLY less expensive.
> I'm also morbidly curious as to how they plan to go from a 5090 dedicated solely to running this ...thing (in parallel with a second 5090 doing the initial rendering) to "runs in the spare memory and compute budget of a single card". A dedicated 5090 means that the model could be anywhere up to 32 GB in size/working space, and if they want to get it working on a sole 5080, that would mean maybe a 4x reduction in size (so, 8 GB for the model, 8 for rendering) plus a massive decrease in compute resources.
>
> In other words, if you think the output is bad now...

I'm sure people using a single card wouldn't mind their games running at half speed, so that AI can show players visuals the developers never intended, instead of the ones they did.
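The memory arithmetic in that comment can be made explicit. A sketch using the commenter's own assumed numbers (32 GB on a dedicated 5090, a 16 GB 5080 split evenly between model and rendering; these are the comment's assumptions, not Nvidia specs):

```python
# Back-of-envelope check of the commenter's numbers (assumptions, not specs):
# a dedicated 5090 gives the model up to 32 GB of working space, while a
# single 16 GB 5080 must share its VRAM with the game's own rendering.
dedicated_model_gb = 32   # whole second 5090 available to the model
shared_card_gb = 16       # total VRAM on a 5080
rendering_share_gb = 8    # half assumed reserved for the game itself

model_budget_gb = shared_card_gb - rendering_share_gb
shrink_factor = dedicated_model_gb / model_budget_gb
print(model_budget_gb, shrink_factor)  # 8 GB left, i.e. a 4x smaller model
```

Under those assumptions the model would have to fit in a quarter of its demo-rig footprint, before even accounting for the lost compute of the second card.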
> It's worse than that. The explanation is either confused or lying because other sources have clarified that the inputs are the standard inputs available to DLSS--which means no geometry data, except the depth buffer. [...]

I don't think it even uses the depth buffer, only the color.
> At the same time, Huang said DLSS 5 is decidedly separate from that kind of “slop,” because it “is 3D conditioned, 3D guided.” The artists behind a game are still the ones creating the in-game structural geometry and textures that form the “ground truth structure” that DLSS 5 works from, Huang said. “And so every single frame, it enhances but it doesn’t change anything,” he said.
>
> Because DLSS 5 is “open,” Huang said artists can train the model for the specific kind of look they want.
>
> In the future, Huang said artists will also be able to prompt DLSS 5 with examples or a description of a desired look—“I want it to be a toon shader,” for instance. And if visual artists want to use DLSS 5’s models “to generate the opposite of photoreal, yeah, it’ll do that too,” he said.
> translation: it's artisanal slop

It's only 'Slop' if it comes from the Slop Valley region of California. Otherwise it's sparkling grey goo.
This is such a silly kerfuffle. Things that want to stand out won't use it. Crap games will. Spend dollars accordingly and it will work out just fine.
> It's essentially an uncanny-er valley filter. That's likely to require even more overpriced GPU power than what we're using now. And Nvidia is deluded enough to wonder why people aren't excited about it...

Nvidia were doing so well by selling hardware to the companies burning money on gen AI.
> I wonder if Jensen's secretary plans all his interviews in locations that give him an excuse to wear that fucking black leather jacket!

The faux biker jacket makes him look manly.
> I'm also morbidly curious as to how they plan to go from a 5090 dedicated solely to running this ...thing [...] In other words, if you think the output is bad now...

Just more selling the dream and shipping the nightmare.
> The faux biker jacket makes him look manly.
To people who never met actual bikers.
I've always suspected Jensen's one of those people.
> Just explain, in a reasonable amount of detail, how it actually works, and then we can decide on a fair basis if we’re individually/collectively OK with it.

He can't, because if he explained it in detail the lies would be much more noticeable, so instead he twists the meaning of words to make people think this is something it isn't.
> What specific controls do the artists have?

Intensity of the filter (unclear if for the full frame), color grading, and masking.
> What does “generative control at the geometry level” mean in practice?

Nothing; those are just words put together to sound like they mean something.
> Because if there's one thing Big Tech is known for--it is respecting user choice.

Reassuring us that we can turn it off isn't the best sales pitch, is it?
> I genuinely don't understand how this guy keeps his job.

Having a knack for surfing bubbles. Oh, and plenty of anti-competitive practices.
> This reliance on the invisible hand of the market hasn’t been serving us for the last several decades. That theory presumes absolute transparency in the marketplace, little to no intellectual property protection, and a flatter distribution of wealth curve than we currently have today. Absent those conditions, the metaphorical hand doesn’t function.

Actually the invisible hand definitely functions. It punches you right in the meat-and-two-veg.
> Nvidia were doing so well by selling hardware to the companies burning money on gen AI. Why do they want to jump into that money pit themselves?

Getting high on their own supply.