Nvidia's Titan V supports more advanced DirectX 12 features than Pascal

Common knowledge on DX12, tbh. That is why Nvidia made Volta: when the proper support is there, Vega does very well. They played it perfectly IMO, even though I hate to admit it. AMD have been panicking and building tanks years in advance, while Nvidia just kept making gaming cards until it mattered.

This is why I know we will see Volta in some form at some point.
 
AMD had to focus on good DX12/Vulkan support, since that at least allowed them to sell GCN (Vega included) as a future-proof architecture.
 
I wonder why ROTTR runs better in DX11 than it does in DX12 on the Titan V.

It is fine including better DX12 support in GPUs, but what if DX12 games don't really make full use of it?
 
A lot of DirectX 12 features are optional, so if a GPU doesn't support one, that feature simply isn't used.

This is a double-edged sword: DirectX 12 games are not made with all of the features in mind, because they need to work both with and without them. Development time is not infinite, so work mostly goes into what will definitely be used, and inevitably a lot of features end up ignored.
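
To put it in concrete terms, checking for an optional feature is a one-off device query on the engine side; here is a minimal sketch (not from any particular game, assuming `device` is a valid ID3D12Device*), using conservative rasterization as the example:

// Minimal sketch: query an optional D3D12 feature and fall back when the
// GPU doesn't expose it.
#include <windows.h>
#include <d3d12.h>

bool UseConservativeRaster(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false; // query failed, take the safe path

    // Tier 0 means "not supported", so a fallback path has to exist anyway,
    // which is exactly why many optional features end up unused.
    return options.ConservativeRasterizationTier !=
           D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
}

The check itself is trivial; the real cost is writing and testing both code paths, which is why the optional stuff tends to get dropped.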
 
Yep, pretty much. That is also why async compute is the most commonly used feature: it benefits one architecture and does nothing for another.
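
For what it's worth, opting in from the API side is cheap; roughly, it is just a second queue (a sketch, assuming a valid ID3D12Device* called `device`), and whether the work actually overlaps with graphics is entirely down to the hardware and its scheduler:

// Minimal sketch: create a dedicated compute queue alongside the usual
// direct (graphics) queue. Hardware that can overlap graphics and compute
// gains from this; hardware that can't simply runs the work serially.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;          // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue; // submit compute work here and sync with the graphics queue via fences
}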
 
As for why ROTTR runs better in DX11 than it does in DX12 on the Titan V: because the DX12 support in ROTTR is s**t, basically. It was botched in after the fact to gain sales. Probably about the poorest demonstration of DX12 available.

AMD had to focus on good DX12/Vulkan support, since that at least allowed them to sell GCN (Vega included) as a future-proof architecture.

That is complete nonsense. Not what you are saying, but the idea that a Fury X, for example, is future-proof. It isn't; it never saw its potential, and now it doesn't have enough VRAM.

Most of what AMD have been making for the last few years has been experiments, and we have been the lab rats.

It was all a futile waste of time and money, IMO.
 
Here is a fun example of games not really supporting DX12 as well as we think they do.

mGPU AOTS DX12

2 Pascal Titans (2016) @2050/2772
6950X @4.4
2160p

[AOTS benchmark result screenshot]

2 Volta Titans @1980/1066
7980XE @4.8
2160p

[AOTS benchmark result screenshot]

I probably would have got a slightly better Volta score if I had run them on the 6950X, as it is more efficient than my 7980XE for mGPU.


The Volta Titans win by a nice margin, but if AOTS's DX12 support were really as good as claimed, the margin should have been a lot wider. This can be seen more clearly at 2160p in the Superposition benchmark, where a Volta Titan is more than twice as fast as a 1080 Ti.
 
That is complete nonsense. Not what you are saying, but the idea that a Fury X, for example, is future-proof. It isn't; it never saw its potential, and now it doesn't have enough VRAM.
I never said the claim held water. Nonetheless, the marketing emphasizes longevity, and that also motivated many buyers.
 
I said on day one that 4GB was not enough, despite what the marketing people were saying about HBM.

The clock speed of the HBM1 was never going to be enough for 1080p gaming either, and it really throttled the card.

HBM does have its uses, but it is not at its best on gaming cards. Even on a Volta Titan, which has a lot of memory bandwidth, you still get noticeable performance gains by overclocking the HBM2 to 1066MHz.
 
Yup. All sorts of crap was being said about how it didn't matter because of how fast HBM was. Erm, excuse me, but that is flagrant BS; we all know you can't fit a size ten foot in a size six shoe, FFS.

What really sucks is that if it had 6GB, or a bit more (maybe 8GB), it would still be a very good gaming card.

Looz - I know you didn't, mate; it was all their marketing BS.

As for Ashes? Yeah, not exactly DX12 done right either. I must admit that when DX11 came out I scoffed at it (mostly because its biggest demonstrator was Dirt 2, and you could hack that to use DX10 and get nearly all of the same effects), but it did eventually come good. It just seems to be taking so much longer with DX12.

The problem (IMO, but with some fact behind it) is that games all have long development cycles. Fallout 4, for example, took years. If you "do a Duke" and start all over again, you lose your business, so it will be a while before the current crop is cleared out and we start seeing DX12 titles built from the metal up.

But we need mGPU to become a thing before Navi happens, or once again AMD will be selling us a guess and a bet on what might happen (see also the FX 8-core series: the IPC of an i7 920, but too many cores to work properly with anything).
 
Comparing AOTS's DX12 support against any other game's may not be valid. AOTS, for example, uses object-space lighting, a technique that is not very common, and it will therefore probably exhibit different performance bottlenecks to a more traditional shading style.
 
I think you guys are missing the point that the vast majority of the performance benefit from DX12/Vulkan, and the reason it was rolled out, comes from reducing CPU load and distributing it across many cores rather than increasing GPU throughput. After all, AMD's push for DX12/Vulkan came at a time when their highest-end part was an 8-core with weak single-threaded performance that they had no intention of updating, while they were putting eight notebook cores into their game-console SoCs.
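
To put the CPU side of that in concrete terms, this is the pattern DX12 enables, sketched roughly (the command lists, allocators and the RecordDrawsForChunk helper are hypothetical, not any real engine's code): several threads record their own command lists and the queue takes the lot in one submission, instead of everything effectively funnelling through a single driver thread as it did under DX11.

// Rough sketch of DX12's threading model: each worker thread records its own
// command list, then the whole frame is submitted in one call.
// Assumes the GPU has already finished with these allocators (fence-synced
// elsewhere). RecordDrawsForChunk is a hypothetical helper.
#include <windows.h>
#include <d3d12.h>
#include <thread>
#include <vector>

void RecordDrawsForChunk(ID3D12GraphicsCommandList* list, size_t chunk); // hypothetical

void SubmitFrame(ID3D12CommandQueue* queue,
                 std::vector<ID3D12GraphicsCommandList*>& lists,
                 std::vector<ID3D12CommandAllocator*>& allocators)
{
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i)
    {
        workers.emplace_back([&, i] {
            allocators[i]->Reset();
            lists[i]->Reset(allocators[i], nullptr); // no initial PSO
            RecordDrawsForChunk(lists[i], i);        // this thread's share of draws
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One submission for the whole frame's worth of work.
    std::vector<ID3D12CommandList*> submit(lists.begin(), lists.end());
    queue->ExecuteCommandLists(static_cast<UINT>(submit.size()), submit.data());
}

None of that makes the GPU itself any faster; it just stops the CPU being the thing the GPU waits on.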
 
While what you are primarily saying is true, the "rather than increasing GPU throughput" part can be seen as contradictory: if a GPU is stalled waiting on CPU submission, and the cost of CPU submission decreases due to the DX12 changes, GPU throughput will naturally increase.
 
I was told the opposite: that DX12 lowered CPU use and loaded it all onto the GPU. That is what I was expecting, and that, really, is what it needs to do now that GPUs can easily make CPUs cry.
 
I really do think that there are still no games that use DX12 properly, or anywhere close for that matter.

I know people don't like synthetic benchmarks, but they are very useful for testing exactly what they set out to test. Cyan Room is a proper DX12 benchmark, and when run on a well-suited card like a Vega 64 or the Volta Titan it gives very good results compared to older DX11-era cards. This sort of performance increase is not seen when running DX12 games, which hints that the API is not being used very well by the game devs.
 
Yeah, I think we just need to wait for the current three-year dev cycles to end; then we should see games starting to roll out with full DX12, "to the metal" so to speak.
 
I was told the opposite: that DX12 lowered CPU use and loaded it all onto the GPU. That is what I was expecting, and that, really, is what it needs to do now that GPUs can easily make CPUs cry.

I don't understand what you mean?

The cost of CPU submission is decreased under DX12 because the driver does less.
 
The main talking point of DX12 was it becoming more GPU-dependent, lowering "bottlenecks". Note for anyone reading this that I have again used quotes, because we all know the term "bottleneck" is about as meaningful as "console port", so I am being sarcastic.

But yes, whereas DX11 relied on both your CPU and GPU, and you really needed a super-fast CPU to make a difference, DX12 was supposed to lower that reliance by using more of the GPU and putting less strain on the CPU. So that, basically, you could use a lowly quad-core CPU at a reasonable clock speed with something like a 1080 and suffer no ills. Or, basically, higher GPU usage, and not the 60-70% we were used to.

That, and the ability to let people use more than one GPU without the need for specific support, drivers, etc., were the two big selling points of DX12.

What I mean is, in the past with DX10 we were sold lighting etc., and with DX11 we were sold tessellation as the main talking point. DX12 was going to be this whole different animal that made games run better, rely on the CPU less (as we all know that even a fast CPU these days is outpaced by a moderate GPU) and so on. No visual improvements to speak of, just much, MUCH better performance.

And so far? It's all been a total bust.
 
Bottleneck is a legitimate term when it comes to code/engine optimisation. Anyway, what you've said is not correct. All DX12 does is remove the hand-holding from the driver and pass that responsibility onto the game engine. The game engine knows its own data better than the driver does: where the driver had a lot of overhead because it could not make assumptions, the engine can, and so it can submit draws more efficiently.

For example, in DX11 you had to set the blend state, depth/stencil state, vertex buffers, shaders, etc. one by one, and the driver had to do work under the hood each time for validation/hazard tracking. In DX12 we just set a PSO (pipeline state object), which contains all the state needed for a draw; passing it all in one go allows for a lighter-weight driver. The same goes for resource tracking: instead of the driver constantly doing it, the game engine can do less of it more effectively, because it understands its own data and the driver does not. You can see how the CPU overhead is reduced compared to DX11; however, you can also see how much responsibility it passes onto the engine, and a bad implementation on the engine side could end up even worse than DX11.
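
To make that concrete, here is a rough sketch of the difference (field values trimmed and error handling omitted; the device, root signature and compiled shader blobs are assumed to already exist, and d3dx12.h is the usual helper header from Microsoft's D3D12 samples):

// DX11 style: individual states bound one call at a time, with the driver
// validating behind each call:
//   context->OMSetBlendState(blendState, nullptr, 0xffffffff);
//   context->OMSetDepthStencilState(depthState, 0);
//   context->VSSetShader(vs, nullptr, 0);
//   context->PSSetShader(ps, nullptr, 0);
//
// DX12 style: the same state baked into one immutable pipeline state object
// up front, so the driver can validate and compile it once.
#include <windows.h>
#include <d3d12.h>
#include "d3dx12.h" // CD3DX12_* helper structs

ID3D12PipelineState* CreatePSO(ID3D12Device* device,
                               ID3D12RootSignature* rootSig,
                               D3D12_SHADER_BYTECODE vs,
                               D3D12_SHADER_BYTECODE ps)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSig;
    desc.VS = vs;                                                       // shaders
    desc.PS = ps;
    desc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT);                // blend
    desc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);      // raster
    desc.DepthStencilState = CD3DX12_DEPTH_STENCIL_DESC(D3D12_DEFAULT); // depth/stencil
    desc.SampleMask = 0xffffffffu;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.DSVFormat = DXGI_FORMAT_D32_FLOAT;
    desc.SampleDesc.Count = 1;
    // InputLayout left empty for brevity (fine if the VS takes no vertex inputs).

    ID3D12PipelineState* pso = nullptr;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso; // later: commandList->SetPipelineState(pso) binds it all at once
}

The flip side is exactly what you said: if the engine gets its PSOs, barriers or resource tracking wrong, there is no driver safety net, and the result can easily end up slower than DX11.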
 