The issue for Nvidia is that the whole advantage of deep learning is that the expensive work can be done elsewhere, offline and NOT in real-time, with the result then dropped into much lighter hardware.
If DLSS were designed that way, patches alone should significantly improve performance on the same GPU; but if it's hardware constrained, then Nvidia have merely given the customer another sales reason to upgrade their GPU every ~12 months.
Would be cool if Nvidia released a behind-the-scenes video showing us how the deep learning is done on their supercomputers and then how it's ported over to consumer-grade hardware.
Dr Pound has some great videos on neural networks, particularly in relation to image manipulation, if you want a light overview of the tech in use here. Essentially, the bit on the supercomputer is a training algorithm and the bit on your GPU is an inference algorithm. Training is the process of adjusting the weights, the "strength" of the links between the circles in the image I've thrown at the bottom, so that a typical input produces your desired output; the algorithm doing the adjusting here is error back-propagation. Inference is just the process of putting an input through the adjusted network of weights to see what the output is. There's no algorithm at all per se, just a load of adding and multiplying weights in a simple manner across the network; the mix of weights (each a number that dictates the output of a single path through the network) and how they link to each other dictates the functionality.

This is roughly how neurons work in your brain too, a fundamentally abstract and analogue process, so it's still more the domain of electrical engineering than computer science at the moment. It's not too popular amongst traditional programmers, as the maths is closer to semiconductor physics stuff, so a lack of coding experience is no real barrier.
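To make the training/inference split concrete, here's a minimal toy sketch in Python. It's a single neuron learning the OR function, nothing like the actual DLSS network, and every name in it is made up for illustration, but the shape of the process is the same: an expensive loop that adjusts weights offline, and a cheap pile of multiplies and adds at run time.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def infer(weights, inputs):
    # Inference: nothing clever, just multiply each input by its weight,
    # add them up, and squash the result.
    w1, w2, b = weights
    return sigmoid(w1 * inputs[0] + w2 * inputs[1] + b)

def train(samples, lr=0.5, epochs=2000):
    # Training: repeatedly nudge the weights to shrink the error between
    # the network's output and the desired output (gradient descent /
    # error back-propagation, collapsed to a single neuron here).
    w1, w2, b = random.random(), random.random(), random.random()
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            grad = (out - target) * out * (1.0 - out)  # d(error)/d(pre-activation)
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b  -= lr * grad
    return (w1, w2, b)

# Toy task: learn the OR function. The expensive loop (train) is the
# "supercomputer" part and runs offline; the cheap forward pass (infer)
# is what would ship and run on the end user's hardware.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = train(data)
for (x1, x2), _ in data:
    print((x1, x2), "->", round(infer(weights, (x1, x2)), 3))
```

The real thing has millions of weights and far fancier update rules, but the split between the two halves is the point: all the heavy lifting sits in the training loop, and what ships is just the frozen weights plus the forward pass.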
A lot of information can be gleaned from even the simplest of glances.
I remember Microsoft showing a render of their Xbox One X SoC, and from that we knew the approximate die size, memory configuration and other notable details. There is a reason why Xbox hasn't given us a similar glimpse this time.
When Nvidia is in the lead, they have little reason to risk extra information getting out. I remember certain architectural enhancements in Maxwell not being spoken about by Nvidia until years afterwards, just so that their competition didn't get a hint at their secret sauce.