Gears of War: Ultimate Edition Performance Retest - The Game has been Fixed!

WYP

News Guru
We have retested the PC version of Gears of War: Ultimate Edition using the game's new performance-enhancing patch. Has the game finally been fixed?



Read more on Gears of War: Ultimate Edition's performance improvements.
 
Looks like OC3D is biased towards AMD. Nvidia was faster, so the game was broken; now AMD is faster or ties, and suddenly the game is fixed. Now look at the Hitman review - "AMD is simply dominant". Suddenly nothing wrong with that. Tides have changed and now AMD is gimping every game.
 
Looks like OC3D is biased towards AMD. Nvidia was faster, so the game was broken; now AMD is faster or ties, and suddenly the game is fixed. Now look at the Hitman review - "AMD is simply dominant". Suddenly nothing wrong with that. Tides have changed and now AMD is gimping every game.

There's a clear difference: Gears of War was outright unplayable on AMD HW, and I honestly have to lol at the idea that AMD is gimping games, because they totally have the market share and money to do so...
 
Nice to see they've fixed it. I personally am still not touching it on the Windows Store, or in its current state of encryption and lack of options.

But if you watched the early benchmarks, the fact that AMD came out ahead isn't a huge surprise, as the older GCN 1.0 cards didn't appear to be gimped as hard and, compared to their Kepler counterparts at least, were quite ahead, with the 280X/7970 often ahead of the 780/Ti!
 
There's a clear difference: Gears of War was outright unplayable on AMD HW, and I honestly have to lol at the idea that AMD is gimping games, because they totally have the market share and money to do so...

Interesting how everything can be buried with a simple "no money to do so".

If AMD doesn't have money, how did they manage to:
- develop HBM
- get Never Settle bundles
- make Gaming Evolved games compute-heavy to gimp Kepler
- force all developers to use Async instead of better graphics

https://www.youtube.com/watch?v=rpDdOIZy-4k
 
Interesting how everything can be buried with a simple "no money to do so".

If AMD doesn't have money, how did they manage to:
- develop HBM
- get Never Settle bundles
- make Gaming Evolved games compute-heavy to gimp Kepler
- force all developers to use Async instead of better graphics

Not nearly as interesting as your ability to ignore the real point and claim I said something I didn't. Saying AMD can't afford to bully nVidia isn't only about money, but also market share, and saying that AMD can't afford it doesn't mean AMD has no money; they obviously do. However, there are some flaws in your logic.
- They developed HBM in cooperation with Hynix, and according to AMD they don't have any licensing deal, as in they don't make money when Hynix or anyone else makes HBM memory.
- Adding a few games with your software partners is not a massive cost.
- lol, BF4, a GE title, worked just fine on Kepler. It's only later titles where Tahiti (79xx & 280/X) keeps up with the 780/Ti, and most of them aren't GE titles; in fact Far Cry Primal is a TWIMTBP title, albeit a less gimped one than usual.
- Async is part of DirectX 12; not using it would be silly from a developer's side, because an architecture that supports it gets more performance (from what I read, it makes it possible to do compute tasks simultaneously with a render task). Just because AMD happens to support it doesn't mean these developers are being AMD-friendly by using it, and I'm not even sure all DX12 titles use it, GOW:U for one.

Here's what AMD partners get: look at Tomb Raider 2013, now look at Rise of the Tomb Raider. 2013 was GE and used TressFX; ROTR isn't and doesn't. The hair looks similar, because AMD's GE partners get the source code for the technology, so in the case of TressFX, Eidos developed it further and made it their own. You literally can't do that with Gameworks features, as they are added as packages on the side. AMD does not gimp; if developers do on AMD's behalf then honestly they are making boneheaded decisions, but they can at least control the technology they've implemented from AMD. I don't think TWIMTBP-supported developers try to cripple AMD HW, but the Gameworks features they add, often at the last minute, do.
 
Nice to see AMD hardware finally be usable in this game.

Hopefully in the future they will add the rest of the features that should have been in from the start, like SLI/Crossfire support, removing the FPS cap, a fullscreen option, etc.

After thinking about it for the last couple of days: honestly, anytime a game shows better performance on a certain manufacturer's hardware, there will always be people coming out saying that company has gimped the game for the other.

Hopefully with DX12 and other such APIs we can get away from these "gimping" claims, but it will also need hardware manufacturers to actually work to better the gaming industry rather than try to put the other company out of business, and to me that is not going to happen.
 
Interesting how everything can be buried with a simple "no money to do so".

If AMD doesn't have money, how did they manage to:
- develop HBM
- get Never Settle bundles
- make Gaming Evolved games compute-heavy to gimp Kepler
- force all developers to use Async instead of better graphics

Found my new fave troll :wub:
 
- Async is part of DirectX 12; not using it would be silly from a developer's side, because an architecture that supports it gets more performance (from what I read, it makes it possible to do compute tasks simultaneously with a render task). Just because AMD happens to support it doesn't mean these developers are being AMD-friendly by using it, and I'm not even sure all DX12 titles use it, GOW:U for one.

Just hold on right there ....

Is this not the exact same argument that can be made for devs using anything Gameworks? Just because AMD cards struggle more with increased tessellation, which is a part of DX, should developers not use it?

A lot of nvidia tech makes heavy use of tessellation; that's what they are good at doing. Saying AMD is fine for using async because they are better at doing it completely nullifies any argument against nvidia at the same time. I agree that it would be a stupid move for any company to develop tech that doesn't make their hardware seem more appealing; it's a sound business model. But you can't say one hardware manufacturer is better than the other and deserves less abuse for doing exactly the same thing.
 
Just hold on right there ....

Is this not the exact same argument that can be made for devs using anything Gameworks? Just because AMD cards struggle more with increased tessellation, which is a part of DX, should developers not use it?

A lot of nvidia tech makes heavy use of tessellation; that's what they are good at doing. Saying AMD is fine for using async because they are better at doing it completely nullifies any argument against nvidia at the same time. I agree that it would be a stupid move for any company to develop tech that doesn't make their hardware seem more appealing; it's a sound business model. But you can't say one hardware manufacturer is better than the other and deserves less abuse for doing exactly the same thing.

I see the similarities, but no, it's not at all the same. Remember, using tessellation on AMD HW was never the issue; it was an issue, however, when Crysis 2 had tessellated water where you couldn't see it, or flat surfaces that ended up being thousands and thousands of polygons, wasting resources like crazy and impacting both AMD and nV users; it would just impact nV users less.

So to recap: no one ever suggested that you don't use tessellation, they just suggested you don't abuse it like that for very little visual gain. Async is a way to basically get free performance out of DX12; the fact that nVidia claims to support it but doesn't is quite different.
 
The underlying problem will always remain. Every hardware manufacturer that works alongside a dev will always push to use methods that benefit their hardware. Unfortunately, unless there were a magical unified agreement, it will continue causing the same green vs red fanboyism. As for any Windows Live based stuff, everyone is screwed. Too many limitations and too much control, and then there is the issue that they are generally ports that were designed to work on AMD hardware to start with. Even GoW being "optimised" by nvidia just ended up with it being broken for all.

I hope all future hardware from any team starts to support features in DX12 that will improve games in every aspect without negatively affecting the others. We can only dream!
 
Async is a way to basically get free performance out of DX12; the fact that nVidia claims to support it but doesn't is quite different.

AsyncCompute is a good way to utilize unused compute units, but if you have no free CUs you're not going to gain anything.
 
Good to see Microsoft have acknowledged the issues and done something. I can't see the Windows Store taking over though.
Origin tried it, Uplay is trying it. Let's see what games they hold hostage, and that will determine how many are "forced" to use the Windows Store. I expect the next Minecraft will do a lot for the Microsoft Store, or maybe a Halo or something.
 
The underlying problem will always remain. Every hardware manufacturer that works alongside a dev will always push to use methods that benefit their hardware. Unfortunately, unless there were a magical unified agreement, it will continue causing the same green vs red fanboyism. As for any Windows Live based stuff, everyone is screwed. Too many limitations and too much control, and then there is the issue that they are generally ports that were designed to work on AMD hardware to start with. Even GoW being "optimised" by nvidia just ended up with it being broken for all.

I hope all future hardware from any team starts to support features in DX12 that will improve games in every aspect without negatively affecting the others. We can only dream!

Is that even an issue? No one argues that nVidia shouldn't optimize for their architectures by using tessellation, or their own implementation of it, to make the game look better, but that was not what nVidia did; they specifically used over-tessellation to hurt everyone including nVidia users, just hurting them slightly less.

GOW:U is a trainwreck. The fact that they decided to polish the old engine instead of making use of a newer engine is, I'm guessing, the core of their issues, more than nVidia's Gameworks. I used to think it was GW, since apparently the game had fewer issues with HBAO+ turned off, but then again, seeing how it worked fine on GCN 1.0 cards, and that the requirements in general are insane compared to the original, I've started thinking it's just the devs that have goofed, and nVidia simply managed to make it less of a trainwreck for their cards.

AsyncCompute is a good way to utilize unused compute units, but if you have no free CUs you're not going to gain anything.
Needed to refresh my memory on this and research a bit, and it's true, but it's very unlikely you don't have free CUs, or idle time between rendering tasks, that would allow you to make use of async compute.
https://www.youtube.com/watch?v=v3dUhep0rBs
Imo a very comprehensible and simple description of what Async Compute does.
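For anyone who prefers code to video: below is a minimal D3D12 sketch (my own illustration, not taken from the video or from GOW:U; the function and variable names are made up) of how async compute is exposed to a developer - you create a compute queue alongside the usual direct/graphics queue and submit compute work there.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create the graphics (direct) queue plus a separate compute queue on an existing
// device. Compute work submitted to the second queue is what *can* run alongside
// rendering and fill otherwise idle CUs - whether it actually overlaps is up to
// the GPU's scheduler and driver.
bool CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: takes the normal render tasks (draws, copies, compute).
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue))))
        return false;

    // Compute queue: compute-only command lists go here, e.g. post-processing or
    // lighting passes that don't need the rasterizer.
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return SUCCEEDED(device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue)));
}
```

Which ties back to the point above: the API only gives you a second queue; how much you actually gain depends on whether the hardware has idle compute units to fill.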
 