this post was submitted on 17 Mar 2026
PC Gaming
Technically all upscaling replaces the frame with a higher resolution frame.
Even with non-AI upscaling, like bilinear or bicubic, the original frame isn't copied and then upscaled. The upscaled image is built from the old image and replaces the original frame in the frame buffer. DLSS doesn't alter that process; it just uses a neural network instead of a bilinear/bicubic algorithm.
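To make that concrete, here's a minimal NumPy sketch of bilinear upscaling (not anyone's actual renderer code, just an illustration): note that the output is a freshly built array sampled from the old frame, which would then replace the original in the frame buffer.

```python
import numpy as np

def bilinear_upscale(frame: np.ndarray, scale: int) -> np.ndarray:
    """Build a new, larger frame by sampling the old one bilinearly.
    frame: (H, W, 3) array. The input is only read; the returned array
    is a brand-new buffer that would replace the original frame."""
    h, w = frame.shape[:2]
    out_h, out_w = h * scale, w * scale
    # Map each output pixel center back to fractional source coordinates.
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    # Interpolation weights, clamped at the image border.
    wy = np.clip(ys - y0, 0, 1)[:, None, None]
    wx = np.clip(xs - x0, 0, 1)[None, :, None]
    top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x1] * wx
    bot = frame[y1][:, x0] * (1 - wx) + frame[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

A neural upscaler slots into the same spot in the pipeline; only the function that maps low-res input to high-res output changes.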
The new difference with DLSS 5 seems to be that instead of using the frame as the only input, it also takes in additional information from earlier in the rendering pipeline (motion vectors) prior to upscaling. This would theoretically create more accurate outputs.
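For a rough picture of why motion vectors help, here's a toy sketch of how temporal upscalers in general (TAA-style accumulation, not NVIDIA's actual algorithm) use them: warp last frame's high-res result to line up with the current frame, then blend. The nearest-sample warp and the blend factor are simplifications for illustration.

```python
import numpy as np

def reproject_history(history: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Warp last frame's high-res history buffer by per-pixel motion vectors
    so its samples line up with the current frame before blending.
    history: (H, W, 3) previous output; motion: (H, W, 2) offsets (dy, dx)."""
    h, w = history.shape[:2]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Follow each pixel back to where it was last frame (nearest-sample warp).
    src_y = np.clip(np.round(yy - motion[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx - motion[..., 1]).astype(int), 0, w - 1)
    return history[src_y, src_x]

def temporal_blend(current: np.ndarray, warped_history: np.ndarray,
                   alpha: float = 0.1) -> np.ndarray:
    """Exponential blend: mostly reuse accumulated history, refreshed a
    little each frame by the newly rendered (upscaled) frame."""
    return alpha * current + (1 - alpha) * warped_history
```

Without the motion vectors, the history samples would smear under camera or object motion; with them, detail accumulated over several frames stays aligned.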
It's kind of like how asking an LLM a question becomes more accurate if you first paste the Wikipedia article which answers your question into the context. Having more information allows for better output quality.
Based on the reporting the use of 2x 5090s in the demo was due to the VRAM requirements of the current iteration, it isn't due to a higher compute requirement. The official DLSS5 release will run on a single card (according to NVIDIA).
It's adding light sources and details that weren't there, which it can't possibly keep consistent from one scene to the next.
For the light sources especially, it's removing shadows and adding light in ways that make no physical sense.
Using motion vectors and geometry data isn't new. Previous generations of DLSS as well as framegen were already doing that.
What's new here is that they stopped inferring details and started making them up.
The output will not be "more accurate". It can't be.
Even if this model doesn't implement the randomness of other AI tech and remains deterministic, that still won't allow devs to accurately control output for the literally infinite number of potential scenes players can create in a game.
I get your point; I don't think it looks very good on the whole, and I almost certainly won't use it.
However, the direction they're going, inserting it earlier in the rendering chain, seems a bit more promising than simply taking a low-res output and making it bigger.
I could easily see having the ability to add properties to materials/shaders which would exclude them from the process. An artist may not care too much about how the grass is enhanced, but they may want to disable it for parts of a character's model or set pieces in the world.
That kind of thing isn't really possible with DLSS as it stands now (and probably isn't possible with DLSS 5), but the idea of attacking the problem earlier in the rendering sequence is interesting.
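To be clear, no such per-material opt-out exists in DLSS today; but if it did, the compositing step could look something like this entirely hypothetical sketch, where an exclusion mask (derived from material flags) keeps the plain upscale in opted-out regions:

```python
import numpy as np

def masked_enhance(enhanced: np.ndarray, fallback: np.ndarray,
                   exclude_mask: np.ndarray) -> np.ndarray:
    """Hypothetical composite: where a per-material flag excludes AI
    enhancement, keep the plain (e.g. bicubic) upscale; elsewhere use
    the AI-enhanced frame.
    enhanced, fallback: (H, W, 3); exclude_mask: (H, W) bool,
    True = this pixel's material opted out of enhancement."""
    m = exclude_mask[..., None].astype(enhanced.dtype)
    return enhanced * (1 - m) + fallback * m
```

The interesting part isn't the blend itself but where the mask would come from: the renderer would have to carry material identity through to the upscaling stage, which is exactly the "attack the problem earlier in the pipeline" idea.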