Best technical explanation of DLSS imo

Computerphile always have great videos, and Dr Mike Pound is excellent when it comes to neural networks. For people who want to dig a bit deeper, I'd check out some of their earlier NN videos.
 
This actually raises a question: since it's a post-processing operation performed on separate hardware, is the introduced input lag significant? Surely you need to buffer frames for the algorithm.
But I guess the buffer won't have a chance to build up, and it ends up exactly the same as rendering the game conventionally at the same frame rate.
 
The difference wouldn't be that much. The framerate increases, after all, so the delay added by post-processing will mostly be offset by the framerate boost: the base frames take less time to render.

With TAA already being so popular, we have already seen input delay increase in games. The latest example where I've seen gamers complain about this is the new Smash Bros, which has a heavy post-processing pipeline. Mostly Melee players, who really enjoy how responsive that game is.
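To put rough numbers on that, here's a back-of-the-envelope sketch. All the timings are invented for illustration; real costs vary by GPU, game and resolution.

[CODE=python]
# Invented example timings -- not real measurements.
native_render_ms = 25.0   # rendering the full-resolution frame directly
base_render_ms = 12.0     # rendering at the lower internal resolution
dlss_ms = 4.0             # hypothetical cost of the upscaling pass

# Worst case: the two stages run back-to-back, so per-frame latency
# is simply their sum.
upscaled_latency_ms = base_render_ms + dlss_ms

print(f"native rendering: {native_render_ms:.1f} ms/frame")
print(f"base + DLSS:      {upscaled_latency_ms:.1f} ms/frame")
# Even with the post-processing cost added on top, total latency comes
# out lower than native, because the base frame is so much cheaper.
[/CODE]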
 
It won't really add more input lag than other methods, and you need to take into account that once the framerate gets too high, DLSS gets disabled. They don't specify exactly when that is, but they did say it's fast enough to process a frame within the 60fps/16.6ms window; at some point it's no longer fast enough and the hardware will no longer process it.
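As a rough sketch of where that cutoff would sit, assuming (purely hypothetically) a fixed per-frame cost for the upscaling pass:

[CODE=python]
# Hypothetical fixed Tensor-core cost per frame -- NVIDIA haven't
# published the real figure, so 3 ms is just a placeholder.
dlss_cost_ms = 3.0

# The 60fps/16.6ms window they mention.
frametime_budget_ms = 1000.0 / 60
print(f"fits in the 60fps window: {dlss_cost_ms <= frametime_budget_ms}")

# In a pipelined design the pass has to finish before the next frame
# arrives, so there's a framerate ceiling past which it can't keep up.
max_fps = 1000.0 / dlss_cost_ms
print(f"stops keeping up beyond ~{max_fps:.0f} fps")
[/CODE]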
 
Yes, but you do need to render the base frame as well. If it takes, say, 10ms to render the frame and then DLSS takes 15ms, you can still hit a 60fps target by using a queue, since the two stages run on separate bits of hardware. But that example would still straight up add 15ms of latency.
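Here's a toy model of that two-stage queue using those same made-up 10ms/15ms numbers, showing how throughput and latency come apart:

[CODE=python]
# The post's hypothetical numbers, run on separate bits of hardware.
render_ms = 10.0  # shader cores render the base frame
dlss_ms = 15.0    # Tensor cores upscale it

# Throughput: with the stages overlapped (frame N upscaling while frame
# N+1 renders), a finished frame comes out every max(stage) ms.
frame_interval_ms = max(render_ms, dlss_ms)
fps = 1000.0 / frame_interval_ms

# Latency: each individual frame still has to pass through both stages.
latency_ms = render_ms + dlss_ms

print(f"throughput: {fps:.1f} fps (one frame every {frame_interval_ms} ms)")
print(f"latency:    {latency_ms} ms from start of render to display")
[/CODE]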


Or maybe I've done goofed since it's been a long day. :D
 
Yeah, personally I do think it has to be a pipeline, and therefore add up to a frame of latency. Otherwise the Tensor or CUDA cores would be dark for roughly half the frame (i.e. sitting idle, doing nothing but waiting for more data, which is a waste of time and energy), and the time allocated to each stage would be only a fraction of the frametime. If the stages ran in series rather than concurrently, high framerates wouldn't hurt DLSS's ability to work, as frametimes would simply increase; with a pipelined approach, the Tensor cores need to finish before the next frame is rendered, which explains the switch-off around 60Hz.

Since it doesn't need a full frame to start work, though, presumably there's a fair bit of overlap, so it rarely actually reaches a full frame of extra latency.
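As a rough sketch of that overlap point, assuming (and this is pure assumption, since the actual scheduling details aren't public) the upscaler can start once some fraction of the frame has been rendered:

[CODE=python]
# All values invented for illustration.
render_ms = 16.6
dlss_ms = 16.6
overlap_fraction = 0.5  # assume DLSS kicks off when half the frame is done

# With no overlap the extra latency is the full DLSS pass; with overlap,
# DLSS starts at overlap_fraction * render_ms, so it finishes that much
# sooner relative to the end of rendering.
no_overlap_ms = dlss_ms
with_overlap_ms = dlss_ms - (1 - overlap_fraction) * render_ms

print(f"extra latency, no overlap:   {no_overlap_ms:.1f} ms (a full frame)")
print(f"extra latency, with overlap: {with_overlap_ms:.1f} ms")
[/CODE]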
 