AMD Explains Asynchronous Shaders on DirectX 12

Haha, it was only a matter of time before they started flexing their DX12 muscles.

Funny how many times I heard them say "Wait for DX12" and I ignored them.
 
Glad AMD have a jump on DX12; they get so much sh*t for nothing as it is.

Apparently they started looking at HBM some ten years ago. AMD love to come up with new ideas; sadly, though, sometimes an idea can be too good.

You know? You're about to sit down and play a game of Monopoly, it's been the same way for donkey's years, and AMD say "if we add condos and extra board spaces it will make the game better!"

Sometimes though people don't want it to be better.

Nvidia learned some pretty harsh lessons with Fermi. It cost them a fortune, it was about as big as a kitchen sink and we were all still firmly stuck on DX9/10.

Since then they have made smaller dies that can be clocked higher (i.e. keeping it all very simple), while AMD have been making Fermi-style dies.

Thing is, this time AMD could finally, actually have something worthwhile. Right now we are all waiting on DX12 with bated breath. What if AMD actually got it right this time and the wait was worth it?

They did it before with 64-bit CPUs, forcing Intel into the same way of thinking. You watch: when Pascal releases with HBM and better DX12 performance, Nvidia will go on like they invented it :rolleyes:
 

You know what they can invent? The HBM version of the 970 memory config. I would completely, absolutely laugh my arse off if that is what they do to gimp one of their cards.
 
Combine some of this better utilisation with other tech like FRTC/Sync panels, and that could be really potent for better performance or better efficiency as needed.
 
Suspicious

How? It's working in DX12 already...

So much so that in some cases you get an 80% increase in performance. Async Shaders are quite powerful. Each ACE on GCN can take on 16 commands (iirc, that or 8) at once. Fiji has 8 ACEs, which means it can schedule up to 128 commands when fully utilised. That's an insane amount; it's essentially 128 tasks in flight at once.
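For anyone wondering what this looks like from the programmer's side, here's a rough D3D12 sketch (device creation omitted, function and variable names are mine, error handling skipped): "async shaders" at the API level just means creating a second command queue of type COMPUTE next to the normal graphics queue. On GCN, work submitted to that compute queue is what gets fed to the ACEs.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Minimal sketch: create a graphics queue plus a dedicated compute queue.
// Whether the two actually run concurrently is up to the hardware (GCN's
// ACEs can; other GPUs may serialise the work).
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    // Ordinary graphics queue: accepts draw, compute and copy work.
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    // Separate compute queue: work submitted here may overlap with the
    // graphics queue's work if the hardware supports async compute.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}
```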
 

Sorry, that post was actually more in reference to the timing of the information rather than the information itself. Nothing serious.
 

Ah no worries.
Now that you mention the timing of the info, I see your point :)
But it's correct information, so in all honesty the green team should stop blaming others for their tech not being as ready as team red's :p
 
You've got to love that, instead of rising to the challenge, Nvidia chose to attempt to reduce performance metrics. It's crazy that any company tries such things when information, especially potentially inflammatory information like this, is more or less guaranteed to come out at some point.
 
Whilst AMD appear to have the jump over Nvidia on DX12, let's not kid ourselves: Nvidia's R&D team is substantially bigger than AMD's, meaning they will catch up soon.

If I were AMD I'd be making HUGE noises now and getting it all marketed... but then again, it boils down to money, which I'm led to believe isn't what it once was there :(
 

We just gotta do our part as enthusiasts and not buy triple-SLI setups. Then we can only pray that information given to the general public doesn't get twisted in nVidia's favor, and that some of the countless nVidiots wise up. Even the playing field, so to speak.
 

Drivers aren't the problem here for AMD. AMD supports Async Shaders at the hardware level; Nvidia do not. So Nvidia get their support through drivers, but that's nowhere near as efficient and has its limits. AMD don't have this issue: you're limited only by the hardware, and the devs have full access to it. So for AMD it's down to the devs more than themselves. Nvidia are just screwed. They'd better have it for Pascal, otherwise they may start to fall behind even more. If Pascal's die doesn't have this capability at the design level, they will probably lose a lot of marketing power, since they won't compete well with AMD and AMD's next-gen products. As of now Nvidia only hold the Titan X as a clear winner, and that could easily change with limited drivers.
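To illustrate why drivers can't fully paper over this, here's a hedged sketch (function and variable names are mine; the command lists and fence are assumed to have been created elsewhere): D3D12 will give every GPU a compute queue and lets you order the queues with a fence, but nothing in the API forces the two submissions to actually overlap. A GPU without hardware async compute can legally just serialise them, which is where the driver-level "support" shows its limits.

```cpp
#include <d3d12.h>

// Submit independent graphics and compute work to separate queues, then
// make the graphics queue wait (GPU-side) before anything that consumes
// the compute results. On hardware with real async compute the two
// submissions can overlap; otherwise the driver may run them back to back.
void SubmitAsync(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12GraphicsCommandList* graphicsList,
                 ID3D12GraphicsCommandList* computeList,
                 ID3D12Fence* fence,
                 UINT64& fenceValue)
{
    // Kick off the compute work first.
    ID3D12CommandList* computeLists[] = { computeList };
    computeQueue->ExecuteCommandLists(1, computeLists);
    computeQueue->Signal(fence, ++fenceValue);

    // Graphics work that does NOT depend on the compute output can be
    // submitted immediately and (on capable hardware) run concurrently.
    ID3D12CommandList* graphicsLists[] = { graphicsList };
    graphicsQueue->ExecuteCommandLists(1, graphicsLists);

    // Later submissions on the graphics queue that DO consume the compute
    // output must wait for the fence; this wait happens on the GPU.
    graphicsQueue->Wait(fence, fenceValue);
}
```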
 
Let's not bring up whether or not nVidia can compete with AMD, NBD ;)
Don't want the nVidiots to crawl out of their holes. So it needs to be whether or not AMD can compete with nVidia.

Either way, nVidia aren't going to be in much trouble, if any really. It doesn't matter how much of an edge AMD has, considering how small it actually is at this time. Then you just have to factor in the influence of the nVidiots, and voila.
 

As of now AMD already surpass most Nvidia cards; the Titan X is the only clear winner. In DX12 the roles change, and it becomes whether or not Nvidia can keep up with AMD. With the latest driver from AMD in DX12, it becomes more apparent that Nvidia need to try harder until they get their hardware more in line with parallel task scheduling, which is why AMD lead in DX12.
 

I wasn't disputing that fact; unfortunately, only rational people like you and me will see it. I don't see nVidia losing much market share even with inferior hardware (assuming, of course, they fix this problem by Pascal at the latest).

What I was trying to say is that AMD's current performance increase isn't enough to change much, because DX12 is still making its way into games. The nVidiots are actually quite competent at finding anything, and I mean anything, that will prove beneficial to their cause. Whether it's significant (or even true) isn't going to do anything to get through to them.
 

We'll just have to wait and see. Currently, though, AMD have the edge in DX12. If they can maintain that throughout this year and all of next, they should start making a market-share comeback. Hopefully Zen proves successful as well.
 