Nvidia brings Integer Scaling to its drivers, but only for Turing

:D:D:D:D
It's as though Nvidia are so desperate to have something good to announce that they'll throw anything in.

It's because Intel said it was coming to their drivers with their 11th Gen graphics. It's a niche feature, but a welcome one.

Basically, Intel said it would happen to please Reddit, and Nvidia saw that and then rushed it out to say "first". Intel knows the competition is coming, and they need to do what they can to appear to have the best software stack.

If you look at the rest of the announcement, Nvidia's other features are basically answering what AMD brought to the table with Radeon Anti-Lag and Radeon Image Sharpening.
 
I don't think this uses any AI tech (it's just a simple nearest-neighbour algorithm rather than bilinear interpolation). Its Turing exclusivity is more likely because Turing was the first architecture to add dedicated integer units and, with them, a concurrent integer datapath alongside the floating-point one (INT32 throughput on Pascal was roughly a third of FP32 rate at best). In Nvidia's original Turing architecture deep dive they state, in relation to integer instructions:
In previous generations, executing these instructions would have blocked floating-point instructions from issuing.
(So in a use case like this, older Nvidia architectures would have to constantly context-switch between integer and floating-point work every frame, which added delays and some cache flushes. It's also possible they could use the fused multiply-add matrix instructions on the Tensor cores for an optimised nearest-neighbour pass, but that wouldn't be a requirement and would break compatibility with the GTX Turing cards, which lack Tensor cores.)
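
For anyone wondering what "just nearest-neighbour" means in practice, here's a rough sketch of my own (not Nvidia's driver code, and the kernel name and layout are invented purely for illustration) of integer upscaling as a CUDA kernel. The point is that the per-pixel work is pure integer arithmetic, exactly the kind of load Turing's concurrent INT32 datapath can run without stalling floating-point work:

#include <cstdint>

// Illustrative nearest-neighbour integer upscale: each destination pixel
// maps back to exactly one source pixel via integer division, so there is
// no blending and the image stays pixel-sharp.
__global__ void nearest_neighbour_upscale(const uint32_t* src, uint32_t* dst,
                                          int src_w, int src_h, int scale)
{
    int dst_x = blockIdx.x * blockDim.x + threadIdx.x;
    int dst_y = blockIdx.y * blockDim.y + threadIdx.y;
    int dst_w = src_w * scale;
    int dst_h = src_h * scale;
    if (dst_x >= dst_w || dst_y >= dst_h) return;

    // All-integer address math -- no interpolation weights, no FP at all.
    int src_x = dst_x / scale;
    int src_y = dst_y / scale;
    dst[dst_y * dst_w + dst_x] = src[src_y * src_w + src_x];
}

Bilinear scaling, by contrast, would compute fractional sample positions and blend four neighbouring pixels with floating-point weights, which is why it blurs pixel art while this doesn't.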
 