this post was submitted on 17 Mar 2026
42 points (95.7% liked)

PC Gaming

Uh huh...

wizardbeard@lemmy.dbzer0.com 7 points 6 hours ago (last edited 6 hours ago)

I've not been an asshole here. You've consistently talked down to everyone calling this slop over some minor technicality in terminology that you've still failed to back up or expand on, beyond linking to the same video a second time.

You've also really zeroed in on claims that I've literally never heard anyone make:

It is not changing geometry. It is changing lighting. It is changing material properties.

No one has said shit about geometry, lighting, or materials, because that is not the level at which DLSS operates, in previous versions or in this latest one.

That's not what anyone thinks is going on here, and the fact that you've now insisted on it twice calls your own understanding of all this into question. It's not making lighting and material changes. You're confusing it with raytracing, which is often toggled on and off alongside DLSS in graphics presets because of its intense resource usage, but which is not part of DLSS. Go download a mod that exposes finer-grained graphics settings in Cyberpunk 2077 and that much will be made clear.

There are plenty of tools people can use to get an idea of how any game's rendering pipeline works, such as Special K, as shouted out by the video you linked. Personally I like ReShade for getting a look at render passes, output targets, buffers, etc.

DLSS operates on a completed "flat" render output/buffer. As far as I'm aware, it has no knowledge of geometry, materials, or shaders unless the devs are doing really wacky shit and have a direct line to Nvidia devs. Maybe they're passing it the depth and normal buffers as well as the flat render output; that opens a lot of options (see Marty's RTGI shader), but it's demonstrably still working with only slightly more than what gets slapped on the screen as a flat raster image.
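To put a shape on that "flat buffers only" point: here's a minimal sketch of what the per-frame inputs to a DLSS-style temporal upscaler could look like under that assumption. All names here (`UpscalerInputs`, `upscale`) are illustrative, not Nvidia's actual API, and the upscale body is a trivial placeholder; the point is that everything crossing the boundary is a screen-space buffer.

```python
from dataclasses import dataclass
from typing import List, Optional

Buffer = List[List[float]]  # a flat 2D screen-space buffer, one value per pixel


@dataclass
class UpscalerInputs:
    """Per-frame inputs a DLSS-style temporal upscaler works from.

    Everything here is a screen-space buffer: no meshes, no materials,
    no shader state ever crosses this boundary."""
    color: Buffer                      # the finished low-res render
    motion_vectors: Optional[Buffer]   # per-pixel motion, if the game exports it
    depth: Optional[Buffer]            # depth buffer, if provided
    previous_output: Optional[Buffer]  # last upscaled frame, for temporal reuse


def upscale(inputs: UpscalerInputs, scale: int) -> Buffer:
    """Placeholder: a real upscaler blends reprojected history with the new
    frame; here we just nearest-neighbor stretch the color buffer to show
    the shape of the interface."""
    h, w = len(inputs.color), len(inputs.color[0])
    return [[inputs.color[y // scale][x // scale] for x in range(w * scale)]
            for y in range(h * scale)]
```

Note that nothing in this interface could even express "change the material on that mesh" — the upscaler only ever sees pixels.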

It can detect movement by comparing a number of previous input frames, using the same kinds of techniques video compression uses to detect and handle motion, as the end of your video makes small mention of.
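For the curious, the video-compression technique in question is block matching: slide a block of the current frame around a small search window in the previous frame and keep the offset with the lowest sum-of-absolute-differences. This is a toy sketch of that idea, not DLSS's actual implementation (which takes engine-supplied motion vectors where available):

```python
from typing import List, Tuple

Frame = List[List[int]]


def best_block_offset(prev: Frame, curr: Frame, bx: int, by: int,
                      bsize: int, search: int) -> Tuple[int, int]:
    """Find where the bsize x bsize block at (bx, by) in `curr` came from in
    `prev`, by minimizing sum-of-absolute-differences (SAD) over a small
    search window -- the block-matching scheme video codecs use to build
    motion vectors."""
    def sad(dx: int, dy: int) -> int:
        total = 0
        for y in range(bsize):
            for x in range(bsize):
                total += abs(curr[by + y][bx + x] - prev[by + dy + y][bx + dx + x])
        return total

    # Only consider offsets that keep the block inside the previous frame.
    candidates = [
        (dx, dy)
        for dy in range(-search, search + 1)
        for dx in range(-search, search + 1)
        if 0 <= bx + dx and bx + dx + bsize <= len(prev[0])
        and 0 <= by + dy and by + dy + bsize <= len(prev)
    ]
    return min(candidates, key=lambda d: sad(*d))
```

A block that moved one pixel to the right between frames comes back as offset (-1, 0): "this block came from one pixel to the left in the previous frame."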

Usually it's used on the output of the 3D render pipeline, before the flat HUD elements are slapped on top. Apparently in a lot of the games the guy who made the video tested, the HUD layer wasn't separated out, or maybe it had something to do with his previous methodology. I'm not watching multiple of his videos to check, and I find it kind of hilarious that someone would think they were some voice of knowledge on how this stuff works if they put in the kind of effort they indicated for their previous videos without using Special K.


I had already watched the video you linked. I've now watched it twice to ensure I didn't miss anything.

It's some guy playing with the features in Special K that let you run DLSS at arbitrary upscaling ratios while allowing HUD elements to render at the viewport resolution. It has nothing to do with the underlying tech or how DLSS works, beyond showing that the defaults in most games could be better tuned.

He has a short bit about older anti-aliasing tech, then says DLSS is an advancement without actually getting into how it works.

In all 18 minutes, there's hardly 60 seconds discussing the actual tech, and that part literally uses the term "generation".

So to be clear, since you seem to be highly mistaken about this: DLSS uses image generation technology along with some very fancy edge detection to attempt to fill in gaps and generate extra details that are not present in the original image.

It is not rendering only the needed sections at higher resolution or anything along those lines, but I can see how someone may think that was implied by your video.


So again, now that I hopefully have shown you that I do in fact know more than a decent bit about how DLSS works, and you still have not provided more to back up your point beyond a video of some guy fucking around with Special K and going "whoa cool"...

What part of DLSS generating image data that does not exist in the lower-resolution source image, and using it to fill in what would otherwise be repeated pixels in a traditionally upscaled (nearest-neighbor, bilinear, trilinear, etc.) image, is not generative?
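The "repeated pixels" part is easy to demonstrate: a traditional nearest-neighbor upscale can only ever copy values that already exist in the source, so any upscaler whose output contains values the source doesn't is, by definition, generating them. A toy sketch (not any particular library's implementation):

```python
from typing import List


def nearest_neighbor_upscale(image: List[List[int]], scale: int) -> List[List[int]]:
    """Upscale a 2D image by an integer factor using nearest-neighbor:
    every output pixel is a verbatim copy of some input pixel."""
    return [
        [image[y // scale][x // scale] for x in range(len(image[0]) * scale)]
        for y in range(len(image) * scale)
    ]


src = [[1, 2],
       [3, 4]]
up = nearest_neighbor_upscale(src, 2)

# The output's set of values is a subset of the input's: nothing new appears.
assert {v for row in up for v in row} <= {v for row in src for v in row}
```

DLSS output fails that subset test on purpose — the new pixel values are exactly the generated part.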


Edit:

Would it kill you to not double the length of your goddamn comment after posting it?

I've got better things to do at this point than continue this, but at a glance I see that you took the wording of Nvidia's news post as gospel.

Edit again:

It's clear now, you got hung up on some misleading marketing wording in one of the headlines. You even admit it uses AI to generate additional image data. Stop being condescending.