Quick News

The same argument could be made regarding double precision floating point: we know AMD excels at it and Nvidia struggles. Then we could also bring up Intel's ability to leap forward with much higher IPC than AMD.. Each manufacturer has their strengths and weaknesses.

Yes, but people were expecting huge advances from Nvidia regarding DX12 and async compute. This and other reviews show that's not the case, and Nvidia more than likely won't support it at a hardware level in the future, which really hurts them tbh. DX12 isn't going away, and no amount of pre-emption or dynamic load balancing can beat concurrent processing. I wasn't dismissing Nvidia, just getting the info out; it's a quick news thread after all ;)
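For anyone wondering what "concurrent processing" actually means at the API level, here's a minimal D3D12 sketch (just my illustration; device setup and error handling are assumed): async compute is simply work submitted on a second queue of type COMPUTE, and whether it genuinely overlaps with graphics is down to the hardware.

```cpp
// Minimal sketch: async compute in D3D12 is just a second command queue.
// Assumes an already-initialized ID3D12Device*; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateQueue(ID3D12Device* device,
                                       D3D12_COMMAND_LIST_TYPE type)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = type; // DIRECT for graphics, COMPUTE for async compute
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

void SetupQueues(ID3D12Device* device)
{
    // Work submitted to computeQueue *may* execute concurrently with
    // graphicsQueue work. On GCN it is fed to the ACEs and genuinely
    // overlaps; other hardware may serialize or pre-empt instead.
    auto graphicsQueue = CreateQueue(device, D3D12_COMMAND_LIST_TYPE_DIRECT);
    auto computeQueue  = CreateQueue(device, D3D12_COMMAND_LIST_TYPE_COMPUTE);
}
```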
 
Async compute is alright; it's worth doing if you can, but you're not going to get HUGE gains.

Well, for Nvidia that would be the case. For AMD you get massive gains. Being part of DX12's main feature set, it certainly will get used. While it takes a little more work, it really pays off. And with Nvidia, you don't really need to do more work; the GPU handles it itself along with the driver (although on Pascal it's via hardware). DX12 is in its early stages, as are the games using it. However, as they mature, you can expect the performance gained from using it to increase much more. As it is now, many devs are still learning it, so again, some time is needed.

I have profiled async compute on GCN hardware.

ok?
While for you it didn't work well, looking at AotS you can see a clearly huge jump in performance. A lot of variables factor into it. While a simple implementation will get you a little more performance, it still has a lot of potential for more. But I'm not blind; I know (you can tell easily looking at the games) that it brings no performance gains to a lot of them. Like I said before, it's going to take time.

Meaning I'm going to take first hand experience over what you claim.

You can take that experience; I never said you couldn't. I'm telling you what we can see in the games that are shipping. And again, like I said, it's only going to get better as things mature. I'm not spewing out random facts; it's pretty clear what gains we can see. AMD themselves have said about a 5-10% improvement for the general case as of now; AotS is the only title so far that goes beyond that for async compute, and it's the most mature DX12 engine. You can see where the potential is. You can take your experience, but with the facts in view, and AMD backing them up (as have other devs, from Hitman for example), you can make these "claims".

Don't know why everyone is trying to argue with me.. All I originally reported was that Pascal isn't supporting async compute and instead went down the better pre-emption path with dynamic load balancing.. then it gets turned into this and that.
Edit: I've NEVER doubted you, SPS; in fact, I take your word on this stuff pretty seriously. But note that I'm not disagreeing with you; I'm saying that as of now it's not entirely mature and things aren't up to par where they should be. I already said the gains range from none, to limited, to large, as we can see in multiple examples. You've attested to the limited gains, and I don't doubt it. You can see gains now in some games, but in others you don't. It's just going to take time before async compute improves performance across the board as a whole. Just apparently a lot more time.

All I'm saying is take things with a pinch of salt. It's hard to know where gains are made without graphics debugging/profiling tools. Of course AMD are going to make the most out of it from a marketing standpoint. Async compute definitely has its uses, but it also has its restrictions.
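If anyone wants to put numbers on it themselves, here's a rough sketch of GPU timing with D3D12 timestamp queries (the surrounding objects are all assumed to exist; an illustration, not production code):

```cpp
// Rough sketch: bracketing a workload with D3D12 timestamp queries so you
// can see whether moving it to the compute queue actually saved GPU time.
// Assumes device/cmdList/queue exist and readbackBuffer is a buffer in a
// readback heap big enough for two UINT64 ticks.
#include <d3d12.h>

void TimeWorkload(ID3D12Device* device, ID3D12GraphicsCommandList* cmdList,
                  ID3D12CommandQueue* queue, ID3D12Resource* readbackBuffer)
{
    D3D12_QUERY_HEAP_DESC heapDesc = {};
    heapDesc.Type  = D3D12_QUERY_HEAP_TYPE_TIMESTAMP;
    heapDesc.Count = 2;
    ID3D12QueryHeap* queryHeap = nullptr;
    device->CreateQueryHeap(&heapDesc, IID_PPV_ARGS(&queryHeap));

    cmdList->EndQuery(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 0);
    // ... record the workload you want to measure here ...
    cmdList->EndQuery(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP, 1);

    // Copy both ticks into the CPU-readable buffer; once the GPU finishes,
    // (tick1 - tick0) / timestamp frequency is the elapsed GPU time.
    cmdList->ResolveQueryData(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP,
                              0, 2, readbackBuffer, 0);
    UINT64 gpuFrequency = 0;
    queue->GetTimestampFrequency(&gpuFrequency);
}
```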

I don't think people are arguing with you; I'm certainly not. If you post about AMD doing this and that better than Nvidia, then of course people are going to play Devil's advocate. This is a discussion board after all. :)

I didn't post about AMD doing better until others started to.. I just originally said Nvidia are foregoing async compute and taking the other route of pre-emption and dynamic load balancing to better hide their inability to do it. It helps them lose less performance, which is good for them. It's just not as effective as async compute, but if it doesn't hurt them or really help them much (for now), then it's not that big of a deal. What made it a big deal is that before Pascal launched, everyone was so hyped for better DX12 performance because of async compute, but that article proves that wrong. That's why I reported it :)
It's not really a case of Nvidia hiding their "inability" to do asynchronous compute; if they wanted to, they could just change things with firmware updates or driver integration later. It's just that Pascal does it differently, and it's efficient for Nvidia to do it their way with Pascal. We are looking at infant cards right now that will mature brilliantly. The AMD/Nvidia battle is on :cool:

AMD release new cards with perfect async compute and it's a win for them? Maybe, but Nvidia will just come back with RAW power and steal the show "again". The 1080 is already proof of that middle-ground raw power, and the numbers for the 1070 are looking mighty tasty too.. 1080 Ti? Zen? Polaris? Exciting times ahead.

Having this debate right at this very moment in time, when we the consumers are limited to reviewer details and sketchy roadmaps, is fun and often entertaining, but futile..
 
No, they have some ability to do it.. however it's pretty bad. It's not something you can change via firmware or drivers either; it's a hardware implementation. That is why they are instead focusing on pre-emption and dynamic load balancing. They improved those methods, which helps reduce the time it takes to switch between compute and graphics workloads. It still has to pause and switch tasks though, so it's still inferior to parallel processing and not as efficient. It's more efficient on Pascal; however, previous-gen Maxwell (and before) don't get the improved pre-emption or dynamic load balancing, because it was a hardware update. So really, Pascal is the only one that gets better at it. Previous cards will continue to suffer the reported loss of FPS in DX12.
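To illustrate where that pause bites, here's a hypothetical sketch of a cross-queue dependency (object creation and command recording are assumed): the two queues meet at a fence, and hardware that runs them concurrently only stalls at that fence, while hardware that pre-empts has to context-switch for the compute work itself.

```cpp
// Hypothetical sketch: cross-queue dependency via an ID3D12Fence.
// Assumes computeQueue/graphicsQueue and recorded command lists exist.
#include <d3d12.h>

void SubmitWithDependency(ID3D12Device* device,
                          ID3D12CommandQueue* computeQueue,
                          ID3D12CommandQueue* graphicsQueue,
                          ID3D12CommandList* computeWork,
                          ID3D12CommandList* graphicsWork)
{
    ID3D12Fence* fence = nullptr;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off compute; signal the fence when it completes.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, 1);

    // Graphics waits on the GPU timeline (not the CPU) for the result.
    // With truly concurrent queues, independent graphics work before this
    // point overlaps the compute job; with pre-emption it is switched out.
    graphicsQueue->Wait(fence, 1);
    graphicsQueue->ExecuteCommandLists(1, &graphicsWork);
}
```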

Raw power is great, yes; however, if you have to pause so often between tasks, it comes to a point where doing them at the same time, even slightly slower, is more efficient. Besides, the 1080/1070 aren't that great of a leap; there's nothing that exciting about them. They could have done waaaay more but did not. With 16nm and FinFET, they basically slightly updated Maxwell and then clocked the nuts off it. I posted another article in a different thread that discussed all this; once you read it from that perspective, you realize we've seen it many a time before.

AMD doesn't really have much to do with this tbh... I wasn't even talking about Polaris.. They have had async compute since 2011; in comparison to Nvidia, yes, it is perfect. But that's not the point I was making.

Unless Nvidia are creating a secret async driver and aren't saying anything, this stands true. However, I really doubt it. Why would they waste their time and money on improving their pre-emption/dynamic load balancing when they could have implemented async compute directly? It's just wasting transistors that could have gone to something else.
 
Please remember guys that this is the quick news thread; if you want a long conversation on a particular topic, please make a dedicated thread.

This thread is for quickly delivering some interesting tidbits of news and allowing the forums to send some quick responses and thoughts on the issue. The whole point of this thread is for it to be a quick read.

Thanks.
 
So, some early DX12 benchmarks were released for Total War: Warhammer over on PCWorld.

[Chart: Total War: Warhammer GPU benchmark results]

AMD takes the DX12 crown here as well. Keep in mind that neither AMD nor Nvidia have official Warhammer drivers for the game, or for the DX12 version either. However, based on previous titles, I don't expect this to change, due to AMD's better async compute performance (Warhammer uses it). Unfortunately, not many cards were tested here (I feel on purpose, but that's a different matter altogether).

Source: PCWorld

[Image: Civilization 6 promotional art]

To avoid double posting (not sure if that's permitted here for different news??), I'll add some info about Civilization 6.
In an interview with IGN, Civilization 6's Art Director, Brian Busatti, talked about many new features:
Bigger unit sizes: You can tell the size difference between a Warrior and an Archer, for example, whether zoomed in or out.
Greater detail up close: Higher unit variation, such as Pikemen looking different depending on the culture of the faction.
New UI: The UI has been updated to an Age of Exploration theme, like the 15th/16th century. Compasses, etc.
One of the biggest new features: Fog of war. Replacing the old cloud system in Civ V, it now has gameplay function behind it, although details haven't been announced yet.
Day/night cycle: This feature was implemented SOLELY because of the fog of war. You can tell they are serious... it also leads into more "ideas" later on, hinting at expanded features for DLC/expansions.

Source: IGN
 
Here is some interesting info for you: AIB prices have been announced here in Norway.

All prices are from komplett.no, which is our biggest e-tailer here:

Reference (Founders): 7600 NOK - £597.48
Asus ROG Strix: 7500 NOK - £589.62
MSI Gaming X: 8099 NOK - £631.53
EVGA w/ ACX 3.0: 6599 NOK - £518.79
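(As a sanity check on those conversions, here's a trivial sketch computing the implied exchange rate from the figures listed above; nothing here beyond the posted numbers.)

```cpp
#include <cstdio>

int main() {
    // Listed komplett.no prices and their quoted GBP conversions.
    const double nok[] = {7600, 7500, 8099, 6599};
    const double gbp[] = {597.48, 589.62, 631.53, 518.79};
    for (int i = 0; i < 4; ++i)
        std::printf("implied rate: %.2f NOK/GBP\n", nok[i] / gbp[i]);
    // Three entries come out at ~12.72 NOK per pound; the MSI line works
    // out to ~12.82, so one of its figures was likely rounded differently.
    return 0;
}
```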

And before you start calling them cheap, remember our currency crashed and the pound gained strength. 8000 NOK for a card is f*****g crazy. I think I'm skipping the entire Pascal series.

Maybe saying farewell to Nvidia for the first time in 10 years.

Edit: I wouldn't be surprised if these are reference boards with aftermarket coolers. No specs are given yet, just prices.
 
Looks like TWW is REALLY benefiting from DX12, as expected for the TW series, which is known for CPU limitations. DX12 was made for this game, it seems.
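(For context on the CPU side: one big reason DX12 helps a draw-call-heavy series like Total War is multithreaded command-list recording. A rough sketch below, with all setup names assumed.)

```cpp
// Sketch: DX12 lets CPU-bound engines record command lists on worker
// threads, then submit them in one call. All setup objects are assumed.
#include <d3d12.h>
#include <thread>
#include <vector>

void RecordInParallel(ID3D12CommandQueue* queue,
                      ID3D12CommandAllocator* allocators[2],
                      ID3D12GraphicsCommandList* lists[2])
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 2; ++i) {
        workers.emplace_back([&, i] {
            // Each thread records into its own allocator/list pair;
            // no driver-side global lock like in DX11 immediate mode.
            lists[i]->Reset(allocators[i], nullptr);
            // ... record this thread's share of the draw calls ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One submission carries the work of every recording thread.
    ID3D12CommandList* batch[] = { lists[0], lists[1] };
    queue->ExecuteCommandLists(2, batch);
}
```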

DX11
[Chart: GPU benchmark results]

DX12
[Chart: GPU benchmark results]

Fury X: nearly a 30 FPS average increase.
1080: about a 25 FPS increase.
Those are the biggest gains I noticed right off the bat. Insane!
AMD certainly benefits more than Nvidia once again in regards to DX12. For example, taking two evenly matched GPUs, the 970 vs the 390: you can see from the API switch that the 970 only manages to gain ~8 FPS, compared to the 390's ~22 FPS increase.. impressive to say the least; it even managed to beat out the 980.

What I find most interesting is that this is the first title where Nvidia doesn't either suffer or gain zero performance from switching APIs. This comes down to the CPU limits. However, I still feel the engine itself is inherently limited rather than the CPU, as evidenced below.

DX11 CPU:
[Chart: CPU scaling results]

DX12 CPU:
[Chart: CPU scaling results]

Again, mighty impressive. Looks like CA and DX12 are really getting along!
This post is getting a little long for a quick news post.. so MUCH more info can be found over on good ol' PCGamer.

Source: http://www.pcgamer.com/total-war-warhammer-benchmarks-strike-fear-into-cpus/
 