GTX 970 3.5GB + 0.5GB with 56 ROPs and less cache than advertised.

Do you mean this test?

This is flawed because it accesses the 500MB on its own and not alongside the 3.5GB like it would when gaming.

Check out the article on PC Perspective; they go into quite a bit of detail and can explain it better than I can, but basically that benchmark isn't giving a proper reading like you would get in a gaming scenario.

[attached image: iT3wKdj.jpg]
 
No, but it explains the issue. And if you force the memory to allocate 1GB from the start, e.g. run BF4 and then kill it, it can give you game-like results. I just posted the picture to show those who are probably wondering what the issue is with their 970.
 
It shows that the 500MB is slower, but it doesn't show how much of an impact it actually has.

When you are playing a game the 3.5GB reads from the 500MB, not on its own like in that benchmark. This is where the performance drop is said to come from, causing stutter. Because the 500MB is slower than the 3.5GB, it causes performance to drop when the 3.5GB reads from the 500MB.

The reason bandwidth drops off so much in that benchmark is because the 500MB is being accessed on its own and not being read by the 3.5GB.

What we need are in-game tests that show how much of an effect it has on performance when the 500MB is accessed alongside the 3.5GB. Accessing the 500MB on its own doesn't give a proper reading because it isn't designed to be used on its own; it is meant to be used alongside the 3.5GB.
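To put rough numbers on that, here is a back-of-the-envelope sketch. It assumes the commonly quoted figures of roughly 196 GB/s for the fast 3.5GB segment and roughly 28 GB/s for the slow 0.5GB segment, and assumes the two segments are read one after the other rather than simultaneously; none of these numbers come from an actual in-game measurement, so treat this as a toy model only:

```python
# Toy model of effective bandwidth when reads are spread across both
# memory segments of a GTX 970 (assumed figures, see above).
FAST_GBPS = 196.0   # assumed bandwidth of the fast 3.5GB segment
SLOW_GBPS = 28.0    # assumed bandwidth of the slow 0.5GB segment

def effective_bandwidth(slow_fraction):
    """Effective GB/s if a slow_fraction share of the bytes live in the
    slow segment and the two segments are read serially."""
    time_per_gb = (1 - slow_fraction) / FAST_GBPS + slow_fraction / SLOW_GBPS
    return 1 / time_per_gb

print(effective_bandwidth(0.0))    # all data in the fast segment: 196 GB/s
print(effective_bandwidth(0.125))  # full 4GB in use (0.5/4.0): roughly 112 GB/s
```

Even in this worst-case serial model, mixed access never drops anywhere near the headline figure the benchmark shows for the slow segment alone, which is the point: measuring the 500MB in isolation overstates the in-game impact.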
 
What do you mean the 3.5GB reads from the 500MB? :huh:
The other article states that it's not a game issue because it allocates from the 3.5GB for on-demand resources and uses the 500MB if it really needs to (whether that's true I don't know).
 
The article and video on PC Perspective can explain it better than I can.
I'm no expert, I'm just explaining it the way I understand it :)
 
To test this accurately would be incredibly difficult, because the card won't dip into the 500MB while it hasn't used up the 3.5GB yet, since the memory is always allocated in chunks of 128MB. You would need the game to use 3456MB for the memory manager to then decide it needs the next chunk and begin using the 500MB partition.
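The arithmetic in that paragraph can be sketched like this. It assumes a 3584MB (3.5GB) fast segment and 128MB allocation granularity, both numbers taken from this thread rather than any official documentation:

```python
# Sketch of when a 128MB-granularity allocator would first touch the
# slow 0.5GB partition (assumed numbers, see above).
FAST_MB = 3584    # the 3.5GB fast segment, in MB
CHUNK_MB = 128    # assumed allocation granularity

def touches_slow_partition(demand_mb):
    """True once the rounded-up allocation no longer fits in the fast segment."""
    chunks = -(-demand_mb // CHUNK_MB)   # ceiling division
    return chunks * CHUNK_MB > FAST_MB

print(touches_slow_partition(3456))  # False: fits exactly in 27 chunks
print(touches_slow_partition(3600))  # True: rounds up past 3584MB
```

In this toy model the slow partition is only touched once the rounded-up allocation pushes past 3584MB, which fits the point that games of the time rarely got there.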

And I think that's why it hasn't been found sooner.

As you said, PCper explains it well. This might also help:

https://www.youtube.com/watch?v=b74...4688&feature=player_embedded&x-yt-cl=84503534

It shows the slow partition isn't slow enough to affect gaming.

edit.

Also, FYI, I just learned that GPU-Z does not read the ROP count from the BIOS; it is a predefined value stored based on Nvidia's specs, which is why it's not correctly reporting 56 ROPs.
 
One thing that has not been mentioned is that all the companies that build the actual cards must have known about this too. I have not seen anybody bashing them. In the end, they are the ones that put the specs on the boxes etc. of their product.

I have a 970 Strix and tbh I have not exactly made my mind up on what I think about the whole thing. But it does not exactly give me a good feeling about Nvidia.
 
That is a good point too
 
Well, what a mess, but if I'm honest I am not really bothered by this, because every company has done something shitty at some point. I'm not going to defend Nvidia, but my next card will still be Nvidia, simply because I like them: their power consumption and noise levels are something AMD should really focus on competing with, along with performance of course.

I understand why people are reacting the way they are, but I think people shouldn't be selling an otherwise pretty good card because of it. The best thing to do is see what Nvidia will or won't do and then make a decision based on that.

Nvidia, wtf though.
 
Good point
 
I have no idea when AMD will release their next generation of GPUs, but when they do I think I am jumping ship.

I was a huge Nvidia fan when I first started PC building, but then I went to AMD with the HD 7950 when they had that first epic Never Settle bundle (with like 5 or so new games). I stayed with AMD since then, but I went for a GTX 970 because Nvidia had released a truly fantastic product.

Now, with all the ways Nvidia has been acting recently when it comes to GameWorks, and now this, I don't trust the way Nvidia operates, which is a shame because I do love a lot of their technologies.
 
I haven't had an AMD card since like 2010 (5000 series); maybe it could be time for me to jump ship too.
 
Either way, I'm happy with the performance of my 970; even if it had been marketed as a 3.5GB card I still would have got one. But I do feel misled: you can't really call it a true 4GB card.
 
nvidia just because i like them on the basis that their power consumption and noise levels is something amd should really focus on competing with, along with performance ofcourse

That has more to do with the company that makes the card's cooler than anything else.
 
Forcing everyone to use the reference blower was AMD's call, and it was probably a bad one for everyone's impression of the card, tbh. Can't say I like Nvidia's no-reference approach to the 970 either, though; the 980 strategy was perfect.

JR
 
True, but it was also AMD that didn't allow custom coolers on the 290 series until months later, while theirs was, and still is, a leaf blower. That strikes me the wrong way, because clearly the custom coolers that were already out could have handled that chip without much hassle.

So on one hand we have Nvidia lying about some memory and ROPs, which doesn't affect most people (come on, be honest, it is still a good card), and on the other hand we had AMD forcing the shittiest cooling solution on its customers and saying it's designed to run at 95 degrees C. Both are pretty crap moves from these companies.
 