WYP (News Guru)
Early reports have also stated that Volta works better under DirectX 12 than Pascal.

Read more about Volta's improved DX12 feature support.

Makes you wonder why ROTTR runs better in DX11 than it does in DX12 on the Titan V.
It is fine including better DX12 support in GPUs, but what if DX12 games don't really make full use of it?
A lot of DirectX 12 features are optional, so if a GPU doesn't support one, that feature simply isn't used.
This is a double-edged sword, as games under DirectX 12 are not made with all of the features in mind: each game needs to work both with and without them. Development time is not infinite, so work mostly goes into what will definitely be used. Inevitably this means that a lot of features are often ignored.
AMD had to focus on good DX12/Vulkan support, since at least that allowed them to sell GCN (Vega included) as a future-proof chip.
That is complete nonsense. Not what you are saying, but the idea that a Fury X, for example, is future proof. It isn't; it never saw its potential and now it doesn't have enough VRAM.
I never said the claim held water. Nonetheless, the marketing emphasizes longevity, and that also motivated many buyers.
I said on day one that 4GB was not enough, despite what the marketing people were saying about HBM.
I think you guys are missing the point: the vast majority of the performance benefits from DX12/Vulkan, and the reason they were rolled out, come from reducing CPU load and distributing it across many cores, rather than from increasing GPU throughput.
While what you are saying is mostly true, the part I quoted can be read as contradicting it: if a GPU is stalled waiting on CPU submission, and the cost of CPU submission decreases due to DX12's changes, GPU throughput will naturally increase.
I was told the opposite: that DX12 lowered CPU use and shifted the load onto the GPU. That is what I was expecting. And that, really, is what it needs to do, with GPUs now easily able to make CPUs cry.
I don't understand what you mean?
The cost for CPU submission is decreased under DX12 as the driver does less.
The main talking point of DX12 was it becoming more GPU dependent, lowering "bottlenecks". Note, for those who may be reading this, that I again used quotation marks because we all know the term "bottleneck" is about as meaningful as "console port", so I am being sarcastic.
But yes, whereas DX11 relied upon both your CPU and GPU, and you really needed a super fast CPU to make a difference, DX12 was supposed to lower that reliance by using more of the GPU and putting less strain on the CPU. So basically you could pair a lowly quad core CPU at a reasonable clock speed with something like a 1080 and suffer no ills. Or, put simply, higher GPU usage instead of the 60-70% we were used to.