It looks like Battlefield V is gaining support for Nvidia's performance-boosting DLSS

Still the single most exciting feature of the new family to me, but also with zero implementations hitherto. Proves why I don't like to buy into new tech immediately :D promises, promises...
 


Yep. Personally, I would've thrown my money at Nvidia if the 2080 Ti was £699 like the 1080 Ti, but at £1,099 they can go jump. And this is coming from someone who's bought multiple Titans in the past.
 

Was thinking this the other day, when I realised I could have amassed about £700 in funds by mid-January. Then I remembered how much they cost, and how much it would cost to add a block, and for what? To give RT to a game I don't like that much.

BFV looks stunning, even at 1080p on a Vega 64. The problem isn't the graphics, it's the game: the same tired old format.

After playing Fallout 76 for most of Xmas? Yeah, BFV is stunning to look at. I just can't get into it, even with the good actors and characters.
 
Yep. Personally, I would've thrown my money at Nvidia if the 2080 Ti was £699 like the 1080 Ti, but at £1,099 they can go jump. And this is coming from someone who's bought multiple Titans in the past.

Me too! Watched the keynote with money in hand, but alas. The Ti costs an abysmal €1,400-1,900 here... the non-Ti around €1,000...
 
The Titan V has Tensor cores for deep learning. Did you really think RT cores were a thing?

They're just renamed Tensor cores, dude. You know, the ones that would otherwise be pretty much useless to someone who plays games.

From Kepler onwards, what you've been gaming on has been seriously cut-down GPUs. Fermi was expensive from a manufacturing standpoint because it had great big honking dies full of hardware useless for gaming.
 
It's no news that other architectures can run the RT software, and the Titan V already has most of the important hardware used in the RT pipeline (Tensor cores are critical, as are the improved mixed-precision instructions, among many other things). The famous Star Wars demo demonstrated that Pascal and Volta cards can certainly run the required calculations, just around 4x slower at minimum for Volta, with the lack of Tensor cores cutting Pascal's performance down to almost half of Volta's.



If it really is possible to hit 100 fps on a Titan V now, then it demonstrates the monumental improvements made to the RT pipeline since developers were finally able to get their hands on it and apply it to anything beyond small test environments. It's worth remembering that, in terms of pure compute performance, the Titan V is a notably larger, more expensive and more powerful chip than any of Turing's offerings (since it was never intended for consumers), and it is the card the initial RTX offerings were technically originally developed on (the only real developer card before the 2000 series).
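For readers who want the shape of the pipeline being described here, the minimal sketch below shows the hybrid approach in outline: trace only a few rays per pixel, then denoise the noisy result. Every type and function name in it is a made-up stand-in for illustration, not any real Nvidia or DXR API.

```
// Illustrative C++ skeleton of the hybrid ray-tracing pipeline discussed
// above. All names here are hypothetical stand-ins, not a real API.
#include <cstdio>
#include <vector>

using Image = std::vector<float>;  // one brightness value per pixel

// Stand-in for the expensive stage that RT cores (or, pre-Turing, the
// plain shader ALUs) accelerate; a low sample count is fast but noisy.
Image trace_sparse_rays(int w, int h, int samples_per_pixel)
{
    return Image(static_cast<size_t>(w) * h, 0.5f);  // dummy data
}

// Stand-in for the cleanup pass: Tensor cores on Volta, or the simpler
// shader-based filters most shipping RTX titles use.
Image denoise(const Image& noisy)
{
    return noisy;
}

int main()
{
    const int w = 1920, h = 1080;
    Image noisy = trace_sparse_rays(w, h, /*samples_per_pixel=*/1);
    Image clean = denoise(noisy);
    std::printf("frame done: %zu pixels\n", clean.size());
}
```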


The Titan V has no RT cores, so it seems it reaches that performance with just the ALUs.
The Titan V has Tensor cores (used to de-noise the ray-traced image significantly faster), and many of the specialised units/instructions that went into Turing's shaders.


EDIT---
They're just renamed Tensor cores, dude. You know, the ones that would otherwise be pretty much useless to someone who plays games.

Nahhhh, Tensor cores are useful for the RT pipeline, but they only perform denoising calculations on the post-ray-traced image, offering around a 1.5x speed-up over running the same work on the CUDA cores (essentially, they let you produce a messier, less complete RT image quick and dirty, then clean it up rapidly afterwards). The RT cores still offer around a 4x speed-up in the relevant calculations, and don't perform anything remotely comparable to the same type of calculation. Essentially, Volta is far closer to Turing under the hood than Pascal was, even if most software doesn't demonstrate that, but the one RT core per SM added to Turing's SMs is undeniably a performance-impacting architectural change too, when relevant.
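For anyone wondering what those relevant calculations actually look like, here's a rough CPU-side sketch of a single ray-triangle intersection test (the textbook Moller-Trumbore algorithm). This is the kind of maths an RT core runs in fixed-function hardware millions of times per frame; nothing below is Nvidia's actual implementation, just an illustration:

```
// Moller-Trumbore ray-triangle intersection, for illustration only.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the hit distance t if the ray hits triangle (v0,v1,v2).
bool ray_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;              // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > eps;                           // hit must be in front of the ray
}

int main()
{
    float t;
    bool hit = ray_triangle({0,0,-1}, {0,0,1}, {-1,-1,0}, {1,-1,0}, {0,1,0}, t);
    std::printf("hit=%d t=%.2f\n", hit, t);   // expect hit=1 t=1.00
}
```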
 
RT cores are not Tensor cores. They should not be considered the same thing.

As said above, early ray-tracing demos used Tensor cores to de-noise ray-traced images, reducing the number of rays needed to create a clear final image. RT cores and Tensor cores do completely different tasks.
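To give a rough idea of what a simple denoising filter does, here's a toy single-channel bilateral filter. Real game denoisers are temporal and guided by normals and depth, so treat this purely as an illustration of the principle: average neighbours, but weight down pixels whose values differ too much from the centre, so edges survive the blur.

```
// Toy bilateral filter on a grayscale image, illustration only.
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<float> bilateral(const std::vector<float>& img, int w, int h,
                             int radius, float sigma_space, float sigma_value)
{
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f, wsum = 0.0f;
            float centre = img[y * w + x];
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    float v = img[ny * w + nx];
                    // Spatial weight: nearer neighbours count for more.
                    float ws = std::exp(-(dx*dx + dy*dy) / (2*sigma_space*sigma_space));
                    // Value weight: similar brightness counts for more.
                    float wv = std::exp(-(v - centre)*(v - centre) / (2*sigma_value*sigma_value));
                    sum  += ws * wv * v;
                    wsum += ws * wv;
                }
            out[y * w + x] = sum / wsum;
        }
    return out;
}

int main()
{
    // A hard edge with mild noise on each side: the filter smooths within
    // each side without blurring across the edge.
    std::vector<float> img = {0.0f, 0.05f, 1.0f, 0.95f};
    auto out = bilateral(img, 4, 1, 1, 1.0f, 0.2f);
    for (float v : out) std::printf("%.3f ", v);
    std::printf("\n");
}
```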
 
The Titan V has Tensor cores for deep learning. Did you really think RT cores were a thing?


You may need to brush up on a few technical details. :p


https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/5


I find it very interesting that a Titan with just Tensor cores can keep up with a GPU that has dedicated ray-tracing hardware, if all this is true.



The Titan V has Tensor cores (used to de-noise the ray-traced image significantly faster), and many of the specialised units/instructions that went into Turing's shaders.



Yes, Mr. Obvious. :) The Titan does indeed have Tensor cores.


But they are not specialised for BVH traversal and ray-triangle intersection testing; to use them, you need a different coding approach.

Or the overlying API is doing so much work that AMD should be able to compute (sorry for the pun) quite well without RT cores.
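To make the BVH point concrete, here's a hand-rolled, heavily simplified sketch of the traversal loop in question, the part Turing's RT cores implement in fixed function. A real GPU version is a carefully tuned stackless or short-stack kernel, not this; everything below is illustrative only.

```
// Simplified BVH traversal with an explicit stack, illustration only.
#include <cstdio>
#include <vector>

struct AABB { float lo[3], hi[3]; };

struct Node {
    AABB box;
    int left;    // child indices, -1 when this is a leaf
    int right;
    int prim;    // primitive index for leaves, -1 otherwise
};

// Classic slab test: does the ray hit the box at all?
bool hit_aabb(const AABB& b, const float o[3], const float inv_d[3])
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float t0 = (b.lo[a] - o[a]) * inv_d[a];
        float t1 = (b.hi[a] - o[a]) * inv_d[a];
        if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
        if (tmax < tmin) return false;
    }
    return true;
}

// Skip whole subtrees whose bounding boxes the ray misses, and only reach
// primitives in leaves. Returns the first leaf primitive hit (a real
// tracer would run a ray-triangle test there and track the closest hit).
int traverse(const std::vector<Node>& nodes, const float o[3], const float d[3])
{
    float inv_d[3] = {1.0f / d[0], 1.0f / d[1], 1.0f / d[2]};
    int stack[64], sp = 0;
    stack[sp++] = 0;                        // start at the root
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        if (!hit_aabb(n.box, o, inv_d)) continue;
        if (n.prim >= 0) return n.prim;     // leaf reached
        stack[sp++] = n.left;
        stack[sp++] = n.right;
    }
    return -1;                              // ray missed everything
}

int main()
{
    // Two leaves under one root; a ray down +z at x=1 only reaches leaf 1.
    std::vector<Node> nodes = {
        {{{-2,-2,0},{2,2,1}},  1,  2, -1},  // root
        {{{-2,-2,0},{0,2,1}}, -1, -1,  0},  // leaf 0 (x < 0)
        {{{ 0,-2,0},{2,2,1}}, -1, -1,  1},  // leaf 1 (x > 0)
    };
    float o[3] = {1.0f, 0.0f, -5.0f}, d[3] = {0.0f, 0.0f, 1.0f};
    std::printf("hit primitive %d\n", traverse(nodes, o, d));  // expect 1
}
```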


You may need to read the article before commenting.
 
Yes, none of the NGX stack works on the Titan V's Tensor cores (which is why there's no DLSS and such for that card), but neither does RTX usually, and given my fairly weak grasp of German (and the unreliability of machine-translation tools), there was no way for us to know exactly how the original claim came about, or what modifications or steps were taken to get the Titan V to run this software. The only public Nvidia real-time ray-tracing demo using the Titan V in the past ran on their OptiX ray-tracing engine, and it is the demo mentioned in my post; OptiX supports an extra layer of denoising on the Tensor cores, beyond the simpler bilateral filters of most current RTX game-engine implementations.
 