Nvidia's GTX Titan V supports more advanced DirectX 12 features than Pascal

Bottleneck is a legitimate term when it comes to code/engine optimisation. Anyway, what you've said is not correct. All DX12 does is remove the hand-holding from the driver and pass that responsibility onto the game engine. Because the game engine knows its own data better than the driver does, it can make assumptions the driver never could, and so it can submit draws more efficiently; the driver carried a lot of overhead precisely because it could not assume anything.

For example, in DX11 you had to set blend, depth/stencil, vertex buffer, shaders, etc, one by one, and the driver had to do things under the hood each time for validation/hazard tracking. Now in DX12 we just set a PSO (pipeline state object) which contains all the data needed for a draw. Passing all this data in one go allows for a lighter-weight driver. The same goes for resource tracking: instead of the driver constantly doing it, the game engine can do less of it more effectively because it understands its own data, unlike the driver. You can see how the CPU overhead is reduced compared to DX11; however, you can also see how it passes a lot of responsibility onto the engine, and a bad implementation on the engine side could end up even worse than DX11.
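
Very roughly, that PSO setup looks something like the sketch below (just an illustration, not any particular engine's code; the device, root signature, compiled shader blobs and input layout are assumed to already exist, and all the names are placeholders):

```cpp
#include <d3d12.h>
#include "d3dx12.h"      // CD3DX12_* default-state helpers
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Everything DX11 set one call at a time (blend, depth/stencil, shaders, input
// layout...) gets baked into a single object up front, validated once.
ComPtr<ID3D12PipelineState> BuildPso(ID3D12Device* device,
                                     ID3D12RootSignature* rootSig,
                                     ID3DBlob* vsBlob, ID3DBlob* psBlob,
                                     const D3D12_INPUT_ELEMENT_DESC* elems,
                                     UINT elemCount)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature        = rootSig;
    desc.VS                    = { vsBlob->GetBufferPointer(), vsBlob->GetBufferSize() };
    desc.PS                    = { psBlob->GetBufferPointer(), psBlob->GetBufferSize() };
    desc.InputLayout           = { elems, elemCount };
    desc.BlendState            = CD3DX12_BLEND_DESC(D3D12_DEFAULT);
    desc.RasterizerState       = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
    desc.DepthStencilState     = CD3DX12_DEPTH_STENCIL_DESC(D3D12_DEFAULT);
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets      = 1;
    desc.RTVFormats[0]         = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.DSVFormat             = DXGI_FORMAT_D32_FLOAT;
    desc.SampleMask            = D3D12_DEFAULT_SAMPLE_MASK;
    desc.SampleDesc.Count      = 1;

    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;   // at draw time it's one call: commandList->SetPipelineState(pso.Get());
}
```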

I didn't say whether it was correct or not, fella. I wasn't implying any of it was true; it's just what we were told as gamers. Marketing BS as usual.

I also know that sometimes the term "bottleneck" is used correctly, but for the most part it's just muppets throwing it around without understanding what it means in the literal sense. Kinda like the term "rig" and so on. All just stupid phrases people come up with to make themselves sound like they know what they're talking about.
 
Ah ok, well I'm just relaying some facts on the marketing BS.
Yeah, I do agree that the term is used in ignorance a lot.
 
One thing I learned a long time ago about CPU bottlenecks in DX11 is that, more often than not, it is more about bragging rights and willy waving than anything else. Even using 4 GPUs I have never really needed to OC any higher than 4.0GHz on any of the Intel CPUs I have used, from quad cores upwards. What I mean by this is that at 2160p you only need to maintain 60fps, which is a doddle for any modern CPU; if you have the GPU grunt to go higher, just turn the settings up to max, which often also has the bonus of flattening the min/max fps for more consistent framerates.

I know you can get higher-refresh monitors at lower resolutions, but even then a CPU OC of about 4.4GHz is enough.

There is no room or point for DX12 in any of the above, and the only thing it really has a use for is supporting older CPUs that AMD no longer makes.

If we had always had DX12 and then an API with all the properties of DX11 came along, people would be saying wow: better mGPU support, higher fps, the ability to use older GPUs in 2-way mGPU, better driver support from the vendors, and the list goes on.

What happens in the future when we get things like 240Hz monitors but the hardware cannot support them because a lack of mGPU support from the game devs is holding it back? Does Microsoft come running to the rescue with Windows 11 and DX11 repackaged as the answer to the problem?

The only one who seems to have really benefited from DX12 is Microsoft, who have sold a lot of copies of Win 10 on the back of it.
 
Well, ironically, I was reading an article in a PC mag on the crapper the other night titled "AMD's Ryzen is here, but the CPU is no longer the important factor".

TBH even an FX 8-core can put up a reasonable representation of itself with a high enough clock speed, more than enough to run a game properly. But Kaap makes a good point: really it's all about frames you can't detect and benchmark results. However, CPU bragging is kinda dying out now. Everyone can afford a reasonable CPU these days, so it's no fun for the guys buying 8-core CPUs (like the 5960X) to brag with.

What I would like to see going into the future is the CPU being left completely untouched (like nothing) and a GPU being capable of running a game well on its own. I mean, that's the whole point of multi-core GPUs, surely? If AMD can whack out four cheap, small Polaris cores and "glue" them together (without having to explain IF, what it does, how it works and how it's not "glued" at all), they could make a frickin' beast.

And I know Nvidia are doing the same. Why not? If you can make four tiny cores more cheaply and easily than one big core of the same size, why not, right?

Problem is, if you had four 1070-type cores on one GPU, the CPU would die lol.

And Kaap - yeah, Microsoft sure pulled a doozy on this one. I switched over immediately because of the hype, and all I have had so far is three completely broken, stinky OSes that I have had to reinstall from the ground up with a later version.
 
There are a plethora of benefits to DX12 besides the performance gains; DX11 really was a hot mess, with far too much reliance on legacy systems that no longer had any relevance to modern hardware. DX12/Vulkan might require more work to get running initially, given it's such a shift compared to the last two decades' worth of APIs, but it relies far less on messy hacks to get the job done (it just turns out that messy hacks can sometimes perform really well, though they often destroy any chance of maintainability).

It means mGPU support, when it comes, will be considerably better implemented, with Microsoft well on the way to creating mGPU extensions and examples for DX12 to ease developer implementation. Rather than relying on messy, hacky AFR, which gives absolutely no benefit in frame latency over a single GPU, games will finally be able to use mGPU setups to genuinely improve performance, rather than the illusion of increased performance that AFR (SLI/CFX) creates. Of course, not using AFR also means the GPUs no longer have to be identical, making APU+GPU combos viable (another big benefit for AMD) - see the sketch below.
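
The "explicit" side of DX12 mGPU basically means the engine enumerates the GPUs itself and can create a device on each one, identical or not. Just a rough sketch of that idea using standard DXGI/D3D12 calls, nothing engine-specific:

```cpp
#include <d3d12.h>
#include <dxgi.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create an independent D3D12 device on every hardware adapter in the system.
// How a frame is then split across them is entirely up to the engine.
std::vector<ComPtr<ID3D12Device>> CreateDeviceOnEachAdapter()
{
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software/WARP adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // an APU and a discrete card can both end up here
    }
    return devices;
}
```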

One of the key benefits on the CPU side is that it doesn't dump the whole load on a single core; combined with the reduced driver overhead, this often means that even when there isn't a clear improvement in FPS, frame latency sees solid benefits.
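
The reason the load spreads across cores is that DX12 lets each thread record its own command list and then submit them all in one go. A bare-bones sketch of that pattern (assuming a device, queue and PSO already exist; the actual draw recording is left out):

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Each worker thread records into its own allocator/command list, so no single
// core has to serialise every draw call the way the DX11 immediate context did.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12PipelineState* pso, int threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < threadCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso, IID_PPV_ARGS(&lists[i]));

        workers.emplace_back([&lists, i] {
            // ...record this thread's slice of the scene's draw calls here...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One submission for the whole batch; the driver isn't validating per call.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```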

It's not as obvious on high-performance systems, but the efficiency of well-implemented DX12/Vulkan against DX11 is worlds apart.

If you want to see Vulkan/DX12 done well, look at emulator implementations. These benefit greatly from the lower latencies, because the unavoidable overhead of translating the emulated system's API calls would traditionally sit on top of the driver's own latency; DX12/Vulkan effectively halves the number of stages required to render something.

Also, it doesn't really make sense to have a system with no reliance on the CPU for its whole graphics stack; a lot of graphics code will always rely on control logic with plenty of branching, something GPUs are inherently bad at. Unless we start throwing small CPU cores onto GPUs, for now it just makes more sense to let the CPU do the graphics work the CPU is good at and let the GPU do the rest. DX12/Vulkan doesn't really put more load on the GPU; it just allows (note: not forces) developers to take a lot of the work the API would traditionally do in realtime on the CPU and scrap it entirely, by writing specific code paths for a given set of GPU hardware that are essentially "pre-decoded" for that architecture. However, generic code paths still get implemented in game engines for compatibility reasons, which is why we aren't always seeing clear benefits from current implementations, and why the benefits can vary greatly between hardware.
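
In practice that "per-hardware path" idea is just the engine querying what the adapter supports and branching, with a generic path kept around as a fallback. A trivial sketch (the path names are made up; the feature query itself is a real D3D12 call):

```cpp
#include <d3d12.h>

// Illustrative only: pick a specialised render path when the hardware can
// handle it, otherwise fall back to the generic one that runs everywhere.
enum class RenderPath { Generic, BindlessTier3 };

RenderPath ChooseRenderPath(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options)))
        && options.ResourceBindingTier == D3D12_RESOURCE_BINDING_TIER_3)
    {
        return RenderPath::BindlessTier3; // hardware-specific, lower-overhead path
    }
    return RenderPath::Generic;           // compatibility path
}
```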

Basically, there's no chance of us going back to DX11-style APIs, because a sensible implementation of DX12 (which we'll be seeing far more of now MS are rolling out more libraries for DX12) already covers everything DX11 could do, but gives developers the option to do far more.
 