Oxide Developer says "NVIDIA was Putting Pressure on them" to change their DX12 Bench

https://www.youtube.com/watch?v=uwhblh6qLBg

what I can see in that video is only that the TX side is far smoother :lol:

It's because he was using Raptr for the Fury X, which is buggy and generally crappy. I'm really not keen on it, to be honest, hence I invested in Mirillis Action!, which is way better, with regular updates through Steam. Plus it uses AMD's codec, so the performance hit is minuscule, and the quality is brilliant and only getting better as they update it :)
 
I'm not surprised. When something doesn't go Nvidia's way they blame others and then try to keep it quiet. The fact that they lied about their DX12 tier support doesn't shock me; they lied about the 970 too. I don't see how they can support asynchronous compute either, since they don't have it at the hardware level like AMD does with their ACEs. The best part is claiming it's not representative of real DX12 performance. Makes me laugh. It's a soon-to-be-shipping game using an engine built for DX12. It's just Nvidia trying to cover their butts over false past statements. Now you guys wonder why I don't like Nvidia... Well, here it is.

Despite Nvidia's constant bad-news headlines, I don't think it's going to change the market share. It'll blow over and no one will care because it benefits AMD, even though AMD are getting even better price/performance with DX12.
 
Well, to be fair, Maxwell 2 does support asynchronous compute. The Asynchronous Warp Schedulers do support this feature. The problem is context switching: when you enable async shading on Maxwell 2, you have to limit the compute workloads, and if you don't, you get a noticeable drop in performance. In essence, Maxwell 2 relies on slow context switching.

I don't know what's causing this issue. So I emailed Oxide, who came to our forums at Overclock.net, and explained the situation. This is where all of this info comes from.

I think it has to do with the L2 cache shared by all of the CUDA SMMs; it might not have the size and bandwidth available to handle the load.

AMD's GCN, on the other hand, has a row of ACEs which reside outside of the Shader Engines. They can communicate through the R/W L2 cache, the Global Data Share cache and the GDDR5/HBM memory in order to synchronize, fetch and execute commands.

Another cause might be the DMA engines on Maxwell 2. They might not be as powerful or flexible as the ones found in GCN.

Just a theory.

In the end Maxwell 2 can support the feature, but loses performance when you work it too hard.
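For anyone unfamiliar with what "async shading" actually means at the API level, here's a minimal sketch (my own illustration, not Oxide's code) of how a DX12 title creates a compute queue separate from the graphics queue, using the standard D3D12 multi-engine API. Whether work on that second queue genuinely overlaps with graphics or falls back to context switching is up to the hardware and driver, which is the whole point of the discussion above:

```cpp
// Minimal sketch of D3D12 "async compute": a direct (graphics) queue plus a
// separate compute queue. Device creation, command lists, resources and
// error handling are omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Graphics ("direct") queue: accepts draw, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Independent compute queue: compute and copy work only. Whether
    // submissions here run concurrently with the graphics queue (GCN's
    // ACEs) or serialize via context switches (the Maxwell 2 claim in
    // this thread) is decided by the hardware/driver, not by the API.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
```

Cross-queue synchronization is then done with an ID3D12Fence via ID3D12CommandQueue::Signal and Wait, so the graphics queue can consume results produced on the compute queue.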

It reminds me of the GeForce FX, which supported Pixel Shader Model 2.0 but did so at a terrible performance cost.

My 2 cents.
 
https://www.reddit.com/r/AdvancedMi...lock_oxide_games_made_a_post_discussing_dx12/

"

AMD_Robert (AMD employee):
Oxide effectively summarized my thoughts on the matter. NVIDIA claims "full support" for DX12, but conveniently ignores that Maxwell is utterly incapable of performing asynchronous compute without heavy reliance on slow context switching.
GCN has supported async shading since its inception, and it did so because we hoped and expected that gaming would lean into these workloads heavily. Mantle, Vulkan and DX12 all do. The consoles do (with gusto). PC games are chock full of compute-driven effects.
If memory serves, GCN has higher FLOPS/mm2 than any other architecture, and GCN is once again showing its prowess when utilized with common-sense workloads that are appropriate for the design of the architecture"
 
Hmm, well, that would make sense too. In parallel tasks, for example, GCN is much more efficient than anything else atm, which would indicate that DX12 should run better on GCN than Maxwell, Kepler, etc., since DX12 schedules tasks in parallel.
 
I think that AMD would do the same were the situation reversed; it's just business as usual, despite how it may look to the consumer.
 
I doubt it. They probably wouldn't say anything tbh.

It may just be business, but the image your brand holds with the consumer is everything. Nvidia keep screwing up, and stories keep coming out about them lying or blaming other companies. That reflects on them as a company. However, in this market, where 95% of Nvidia buyers are blind, hardcore fanboys, they probably don't need to worry. It's a shame too.
 
Funny though, it's like nVidia feel as though they have a large enough market share that they can do whatever they want, be it screwing over customers or slinging dog turd.
 