AMD used an Intel CPU in Project Quantum

Did you even read my post?

GPU performance is often limited by your CPU.
AMD's drivers do not support multi-threaded instructions with DX11, so single-core performance is the only thing that matters with AMD GPUs in current games.

Intel CPUs are significantly ahead of AMD CPUs when it comes to per-core performance.
So if they want to actually show off how fast their new Dual Fury X card is, they have to pair it with an Intel CPU to avoid bottlenecking it.

Yes, but it seems as if you didn't read mine. I already said the GPU is limited by the CPU. Why are you trying to prove a point that we all already know? :huh:
 
Some people... If this were nVidia and they had their own CPUs competing against AMD and Intel, does anyone think they would actually use the Intel chip even if it's better?? Stop complaining... Would you people rather AMD lock the CPU to their own and make the consumer suffer?
 

Uh, what are you talking about? No one is complaining. All it is, tbh, is AMD using Intel CPUs because they are better. It's more AMD realising defeat on the major CPU front and having accepted it. It's not a bad thing; Zen is looking like a really big step up for AMD, and tbh being behind and defeated for so long should give those engineers some incentive to come back and kick some butt. I like to think of it as a positive morale boost ^_^
 

Do keep in mind that Intel is huge, and so is Nvidia. Both do their "own" thing, while AMD fights two battles at once: the CPU battle with Intel and the GPU battle with Nvidia.

So even though they use Intel, I don't see it being such a big problem anyway... Intel is clearly superior in the CPU market and overall, and I think we can all see that and agree on that front :)
 
Well, first of all, I'm not sure why you keep saying that GPU discussion is irrelevant, since this system was built to demo their new Dual-Fury X GPU.
The point is that the problem here is not just that one CPU is faster than the other.

Despite the fact that AMD's CPU architecture is designed for multi-threaded workloads, their GPU drivers don't support multi-threaded instructions in DX11 at all.
Compare that with NVIDIA, where the best DX11 performance comes from pairing their cards with an 8-core CPU that has Hyper-Threading disabled.

So because AMD's drivers still don't support multi-threaded instructions, even after DX11 has been around for more than six years, AMD has no choice but to use an Intel CPU.
If they had multi-threaded support in the drivers, maybe they would have been better off pairing it with one of their own 8-core CPUs instead of a 4-core Intel chip?

Now, the reality today is that it probably wouldn't matter whether their drivers were multi-threaded or not, because Intel is so far ahead that four of their cores will beat eight of AMD's even in heavily multi-threaded applications. But it can't have done them any favours to have video drivers that make poor use of their own CPU architecture, holding back GPU performance regardless of the CPU the card is paired with.

It's no surprise that they have been focused on developing Mantle, since that really helps remove the bottleneck from the CPU, but I can't help wondering if they would have been better off focusing on their DX11 drivers first.

This has been holding back the performance of their GPUs in DX11 titles regardless of the CPU they're paired with, and it also means you have no option but to pair them with an Intel CPU if you want the best performance, because strong multi-threaded performance does nothing for their GPUs.
 

I never brought up the GPU until you did, for some odd reason. And yes, it's irrelevant, as the thread is focused on the CPU side of things, not the GPU. Also, again, I already said GPUs are limited by their CPUs after you started off on your GPU tangent, so you keep repeating yourself when everyone here already knows the info you keep on saying. It's honestly funny you are trying so hard to prove something we already know ^_^
 

AMD don't support DX11 instructions? What instructions..? Secondly, DX11 is single-threaded at heart. You can only submit GPU commands from one thread. Where are you getting this info from?
 
Check the results from the 3D Mark API test posted on the previous page:

NVIDIA's DX11 performance scales with the number of CPU cores.
AMD's DX11 performance does not scale at all. Multi-threaded DX11 test results are virtually identical to single-threaded DX11.

Not sure why you think DX11 is single-threaded: Introduction to Multithreading in Direct3D 11
 

OK, so it appears I got confused and misinterpreted your initial point. After re-reading it: when you referred to AMD drivers, I thought you were talking about some sort of AMD CPU driver limitation.

However, I would like to point out that AMD may not be limited by their lack of MT support, as achieving 2 million draw calls per second is not necessarily anywhere near the number of draw calls a game needs to render. Over the years it's always been about batching, so instancing is used as much as possible. I can see where you're coming from, though: a synthetic API overhead benchmark definitely proves your point, but AMD's lack of MT performance may not be the bottleneck in many games out there, depending on their rendering architecture.
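To illustrate the batching point, here's a rough sketch (helper names and parameters are my own placeholders, not from any particular engine) of why draw-call counts in real games stay far below what a synthetic overhead test pushes: instancing collapses thousands of per-object calls into one.

```cpp
#include <d3d11.h>

// 'ctx', 'indexCount' and 'instanceCount' are assumed to come from the app's renderer.
void DrawCrowdNaive(ID3D11DeviceContext* ctx, UINT indexCount, UINT instanceCount)
{
    // One API call per object: CPU/driver overhead grows with the object count.
    for (UINT i = 0; i < instanceCount; ++i)
        ctx->DrawIndexed(indexCount, 0, 0);
}

void DrawCrowdInstanced(ID3D11DeviceContext* ctx, UINT indexCount, UINT instanceCount)
{
    // One call draws every copy; per-instance data lives in a second vertex buffer
    // declared with D3D11_INPUT_PER_INSTANCE_DATA input-layout elements.
    ctx->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
}
```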

Regarding the DX11 API doc link: that graph compares the number of draw calls, and draw commands can only be played back on the immediate context. The immediate context is not thread-safe and can only be used by one thread. Reading from the docs, "You cannot play back two or more command lists simultaneously on the immediate context", which means GPU commands can only be issued in single-threaded fashion.
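To make the distinction concrete, here's a minimal sketch of how DX11 multithreading is structured, assuming the device and contexts already exist (the function names are just mine, and error handling is omitted): recording can be spread across worker threads via deferred contexts, but the resulting command lists still have to be played back one by one on the single immediate context.

```cpp
#include <d3d11.h>
#include <thread>
#include <vector>

void RecordWork(ID3D11Device* device, ID3D11CommandList** outList)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);   // one deferred context per thread

    // ... record state changes and draw calls on 'deferred' here ...

    deferred->FinishCommandList(FALSE, outList);   // bake the recording into a command list
    deferred->Release();
}

void SubmitFrame(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    ID3D11CommandList* lists[4] = {};
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back(RecordWork, device, &lists[i]);   // recording runs in parallel
    for (auto& t : workers)
        t.join();

    // Playback is still serialised: command lists can only be executed on the one
    // immediate context, from one thread.
    for (ID3D11CommandList* list : lists)
    {
        immediate->ExecuteCommandList(list, FALSE);
        list->Release();
    }
}
```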
 

DX11 is single-threaded at heart; I don't know how you can disagree with that. This is why we have all the limitations we do now, and the reason DX12/Mantle/Vulkan are very, very big leaps forward in API development. Just because it's single-threaded at heart doesn't mean it can't be multi-threaded, however. It does mean it doesn't do it all that well, which is still why the first core is always used far, far more than the others even in multi-threaded situations, and also the reason DX12 uses them all much more efficiently and proportionally.

And regarding your links: in games you won't hit that limit going from ST to MT with the AMD cards, or Nvidia for that matter, as SPS said himself. That is a synthetic benchmark designed to hit these limits. Games won't hit those limits.
 
NVIDIA can handle more than twice the number of draw calls when DX11 multi-threading is used.
Though the number is relatively low compared to DX12/Vulkan/Mantle, it's still a significant difference, and almost 3x more than AMD can handle.

That's despite the fact that the 290X actually shows numbers 30% higher than a 980 when using DX12.
So it's the faster card, but it's held back when running DX11 titles due to this lack of support for multi-threading in the drivers.

As for games not hitting those limits: not true at all. There are many, many games now where turning up settings such as the view distance or object distance will have GPU usage drop well below 100% while the framerate also takes a nosedive.

Any time that happens, you're being bottlenecked by the CPU, and it will happen far sooner with AMD cards due to their lack of multi-threading support when running DX11 titles.

Yes, at the core of it, the API is to blame, but AMD performance falls off a cliff far quicker than NVIDIA's in those situations.
And it's not like Microsoft releasing DX12 fixes the problem.
Very few DX11 titles are going to be ported over to DX12 - it's a significant amount of work.

Those existing DX11 games which are hitting the limit of what the API can handle are always going to have that bottleneck.
And it becomes more of a problem as GPUs get faster and we see lower and lower GPU usage in those situations.

Sure, not all DX11 titles are multi-threaded, but those that are will run far better on an NVIDIA card than an AMD one as a result.
 

That CPU bottleneck doesn't necessarily have anything to do with the API calls, though. When you increase the draw distance there's more to process on the CPU before you even start drawing: sorting, culling, LOD selection, generating shadow lists, etc.
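Something like this, just as a rough sketch with made-up types ('Object', 'Frustum', 'SelectLod' are illustrative names, not real engine code), is the kind of per-frame CPU work that grows with view distance before the API is ever touched:

```cpp
#include <algorithm>
#include <vector>

struct Object { float distance; int lod; };
struct Frustum
{
    float maxDistance;
    bool Contains(const Object& o) const { return o.distance < maxDistance; }
};

int SelectLod(float distance) { return distance < 100.0f ? 0 : (distance < 400.0f ? 1 : 2); }

std::vector<Object> BuildDrawList(const std::vector<Object>& scene, const Frustum& view)
{
    std::vector<Object> visible;
    for (const Object& o : scene)       // a bigger view distance means more objects to test
        if (view.Contains(o))
            visible.push_back(o);

    for (Object& o : visible)           // LOD selection, again per visible object
        o.lod = SelectLod(o.distance);

    // Sort front-to-back before a single draw call is ever issued.
    std::sort(visible.begin(), visible.end(),
              [](const Object& a, const Object& b) { return a.distance < b.distance; });
    return visible;
}
```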
 
I'll ignore everything else you said as I already stated why earlier and you've basically repeated yourself.


But the claim that very few DX11 titles will be ported over proper made me laugh. DX12 does fix this problem and many other issues currently in DX11... It's hardly a significant amount of work; it's easier to port a game to DX12 than it is to consoles. The industry as a whole has started supporting this new API far faster than any previous version of DX. If it were a significant amount of work, there wouldn't be any incentive for the industry to move forward and everyone would wait, which isn't happening. Take Mantle, for example: it took, I believe, one or two guys less than six weeks to go from DX11 to Mantle. Imagine a whole team of people doing it. It wouldn't take long at all.
 

I'd have to argue here and say it completely depends on whether the renderer was written DX12-friendly, which I doubt most games are. DX12 is very different, and it is not a trivial port if you want to utilise DX12 to its full potential. M$ actually said that a naive port would be slower than DX11.
 

Since most games aren't, then, doesn't that team of six people (IIRC) who ported over to it in six weeks, while improving performance slightly and also adding more detail and characters, just show that it can be done? It's possible to do so. Now, for a massive AAA game it will take longer, but with a bigger team of people working on it I don't think it will take that long. A month or two, or even three, is pretty reasonable to port over.

As MS have said, DX12 is a little more difficult because it removes all the safe walls that DX11 had, so it takes more care and attention to make sure devs don't run into issues such as excessive battery consumption or very odd bugs, etc. Making sure the code is written well is probably the most time-consuming part. I think it would present more of a problem porting over to DX12 if they used the "full feature" version, DX12.1. That would probably complicate things, as it's just more crap to take into account.
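For example, one of those removed safe walls is CPU/GPU synchronisation: in DX12 the app has to fence its own work instead of the driver doing it behind the scenes. A minimal sketch, assuming 'device' and 'queue' already exist (the helper name is mine, not from any shipping engine):

```cpp
#include <d3d12.h>
#include <windows.h>

void WaitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ID3D12Fence* fence = nullptr;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    queue->Signal(fence, 1);                     // GPU sets the fence to 1 when it reaches this point
    if (fence->GetCompletedValue() < 1)
    {
        fence->SetEventOnCompletion(1, done);    // wake the CPU once the GPU catches up
        WaitForSingleObject(done, INFINITE);
    }

    CloseHandle(done);
    fence->Release();
}
```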
 

A couple of months is probably more accurate, yeah. More people doesn't necessarily equal quicker turnaround, of course. Yes, self-synchronisation is one of the bigger changes, along with the way data is submitted to the GPU. DX12 requires you to set the entire pipeline state in one API call, which is good for efficiency but different to DX11.
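Here's roughly what that looks like, as a trimmed-down sketch rather than production code ('device', 'rootSignature' and the shader blobs are assumed to exist, and only a subset of the fields is shown): everything DX11 set through separate state objects and calls gets baked into a single pipeline state object up front.

```cpp
#include <d3d12.h>

ID3D12PipelineState* CreatePso(ID3D12Device* device,
                               ID3D12RootSignature* rootSignature,
                               D3D12_SHADER_BYTECODE vs,
                               D3D12_SHADER_BYTECODE ps)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSignature;
    desc.VS = vs;                                            // shaders...
    desc.PS = ps;
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;   // ...rasterizer state...
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.DepthStencilState.DepthEnable = TRUE;               // ...depth state...
    desc.DepthStencilState.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ALL;
    desc.DepthStencilState.DepthFunc = D3D12_COMPARISON_FUNC_LESS;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;         // ...and output formats
    desc.DSVFormat = DXGI_FORMAT_D32_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.SampleMask = 0xFFFFFFFFu;

    ID3D12PipelineState* pso = nullptr;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));  // whole pipeline in one call
    return pso;
}
```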

I think we've wandered way too far off topic now :p
 

Lol yeah, probably :p
 