Nvidia GTX 980 Ti Review

You guys should remember that games will use as much VRAM as they can. If you have 4GB, a game will use up to 4GB; if you have 8GB on the same chip, it will extend its usage closer to 8GB. BF4 is a great example of this. Just because a card has 12GB of VRAM and monitoring software reports 8GB in use doesn't mean the game actually needs all of it; it just keeps more data resident in the pool so it won't have to process it again later. A good modern engine will scale pretty well.
Oh, and 4GB isn't virtually 4GB in practice. With improved color compression algorithms and the other tech involved with VRAM, the effective footprint of the same data can end up smaller, depending on how efficient the algorithms are. But yes, physically 4GB = 4GB.
Just thought I could clear that up... hopefully it makes sense.
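
To make the "games cache as much as they can" point concrete, here is a minimal, hypothetical sketch of a texture cache that simply fills whatever budget it is given. It is an illustration of the behaviour, not any engine's actual code, and the class and function names are made up.

```python
# Toy illustration (not any real engine's code): a texture cache that keeps
# filling whatever VRAM budget it is given, so the same game "uses" more
# memory on a 12GB card than on a 4GB card without actually needing it.

class TextureCache:
    def __init__(self, vram_budget_mb):
        self.budget = vram_budget_mb       # e.g. most of whatever the card has
        self.used = 0
        self.resident = {}                 # texture_id -> size in MB

    def request(self, texture_id, size_mb, load_from_disk):
        if texture_id in self.resident:
            return "cache hit"             # no re-processing / re-streaming
        # Evict old entries only when the pool is actually full.
        while self.used + size_mb > self.budget and self.resident:
            _, freed = self.resident.popitem()
            self.used -= freed
        load_from_disk(texture_id)
        self.resident[texture_id] = size_mb
        self.used += size_mb
        return "streamed in"

# On a 4GB card the cache churns; on a 12GB card almost everything stays
# resident, so monitoring tools report usage creeping toward the full pool.
cache = TextureCache(vram_budget_mb=3500)
print(cache.request("rock_albedo", 64, load_from_disk=lambda tex: None))
```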

Try loading Watch Dogs maxed out at 2160p on some 4GB 290Xs and see what happens. :D

All you will get is a blank screen, as the cards won't even load it properly, let alone run it. I think exactly the same will happen to the 4GB Fiji cards.
 
But yes physically 4GB = 4GB

Completely true, at least until present-day stacked HBM, which sits on the same die as the GPU cores, enters the picture.

The possibility exists that this new design may result in drastically lower latency (because the memory and GPU reside in such close proximity), and it may just answer the GPU's memory requests fast enough to clear them before any choking occurs.

I hope that's so. If not, the GTX 980 Ti will be my method of suffering. :p

We only have a few days to wait as it is.
 

WD isn't exactly a great example. It uses far more than it should; I've seen other games with even bigger scale and better graphics use less, so I'd consider WD unoptimized. Fiji's improved color compression algorithms may help a little bit there. Another horrible example would be COD: AW... that one is just memory-leak happy.
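
For anyone curious what "color compression" actually means here, below is a toy sketch of the general delta-compression idea (store a block anchor plus small per-pixel differences). It is not AMD's or Nvidia's actual hardware scheme, just an illustration of why flat regions of a frame cost less space and bandwidth.

```python
# Toy sketch of the general delta color compression idea (not AMD's or
# Nvidia's actual hardware scheme): store one anchor value per block plus
# small per-pixel deltas, which need far fewer bits when a block is flat.

def compress_block(pixels):
    anchor = pixels[0]
    deltas = [p - anchor for p in pixels]
    if all(-8 <= d <= 7 for d in deltas):        # every delta fits in 4 bits
        return ("compressed", anchor, deltas)    # ~half a byte per pixel
    return ("raw", pixels)                       # fall back to uncompressed

flat_block = [200, 201, 201, 199, 200, 202, 200, 201]   # e.g. clear sky
noisy_block = [200, 12, 90, 255, 3, 180, 77, 140]       # e.g. dense foliage
print(compress_block(flat_block))
print(compress_block(noisy_block))
```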
 

All games are good examples, because they are games!!!

GTA V is another 8GB+ memory hog.
 

No, not all of them are, because not all of them are as well optimized as the previous two I mentioned. If a game leaks memory, it isn't exactly managing memory properly.
I've seen 2GB cards run the game easily at 1440p maxed out... hell, even a 970 with all its 3.5GB of fast VRAM is enough at 4K maxed out with 0x MSAA. It shouldn't use 8GB. With MSAA on at 4K I can see maybe 6GB, but at 4K you don't really need MSAA. 4GB is still the sweet spot for gaming.
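
To put some very rough numbers on the MSAA-at-4K point: the multisampled color and depth targets themselves grow with the sample count, but even at 8x they only account for a few hundred MB; most of a 6-8GB reading is textures and cached assets. The figures below are back-of-the-envelope assumptions (RGBA8 color, 32-bit depth), not measurements from any particular game.

```python
# Back-of-the-envelope math for the render targets alone at 4K. Assumes
# RGBA8 color and a 32-bit depth buffer; real engines add many more buffers.
width, height = 3840, 2160
bytes_color, bytes_depth = 4, 4

for samples in (1, 4, 8):                     # no MSAA, 4x, 8x
    color = width * height * bytes_color * samples
    depth = width * height * bytes_depth * samples
    print(f"{samples}x MSAA: ~{(color + depth) / 1024**2:.0f} MB for color + depth")

# Prints roughly 63 MB, 253 MB and 506 MB. MSAA multiplies the render-target
# cost, but the bulk of a 6-8GB reading is still textures and cached assets.
```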
 

If a game exists and people want to play it, then it doesn't matter how well it is written; all that matters is that people want to run it.

As for 8x MSAA at 2160p, it does make a difference in games and is well worth running.
 
That means textures, maps, etc. are already designed and made, and they won't go back and redo it all to make use of what a PC could offer.

So, for example, we may never even see a real, actual 4K game. What we will see instead is those console textures upscaled to 4K and nothing more.

Not true - textures will almost definitely be authored for 4K resolutions and down-scaled for consoles these days.
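
As a concrete (and entirely hypothetical) illustration of that workflow, a build step might take the high-resolution source art and emit the smaller versions each platform ships with. The sketch below uses Pillow purely for illustration; real studio pipelines use their own tooling and texture formats, and the file names are made up.

```python
# Entirely hypothetical build step: author the texture once at high res,
# then emit the smaller versions shipped on consoles / lower PC presets.
# Pillow is used purely as an illustration; real pipelines use their own tools.
from PIL import Image

def build_platform_textures(source_path):
    src = Image.open(source_path)            # e.g. 4096x4096 source art
    targets = {"pc_ultra": 4096, "console": 2048, "pc_low": 1024}
    for name, size in targets.items():
        resized = src.resize((size, size), Image.LANCZOS)
        resized.save(f"{name}_{size}.png")

# build_platform_textures("rock_albedo_4096.png")
```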


This is somewhat true - game engines will often pre-allocate a percentage of the available VRAM as a pool. This helps stop popping of geometry/textures, since assets already in the pool don't need to be streamed in from disk at the moment they're needed.
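
A minimal sketch of that pre-allocation idea, with assumed numbers rather than any engine's real heuristic:

```python
# Minimal sketch of the pre-allocation idea, with assumed numbers rather
# than any engine's real heuristic: reserve a fixed fraction of whatever
# VRAM is reported and stream assets into that pool.

def plan_streaming_pool(total_vram_mb, reserve_fraction=0.8, reserved_for_os_mb=300):
    usable = max(total_vram_mb - reserved_for_os_mb, 0)
    return int(usable * reserve_fraction)

for card_mb in (2048, 3584, 4096, 8192):
    print(card_mb, "MB card ->", plan_streaming_pool(card_mb), "MB pool")

# The pool grows with the card, which is one reason the same game "uses"
# far more VRAM on a bigger card even though it runs fine in much less.
```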


Different engines use MSAA in different ways - don't expect the same usage across engines.
 
It's just best practice, dude, and it's how every game studio I've worked in does it.

Actual 4K users are less than 1% of all users at the moment, according to the last Steam survey I saw. So, considering that 1440p users are a tiny margin of overall users, that the console version is the priority, and that consoles aren't powerful enough to throw around 4K, I'm having terrible trouble believing that a game studio would bother.
 

When you consider the life cycle of a game, the time it takes to go from concept to finished article, and how long after release it needs to stay competitive to provide revenue, it's pretty easy to understand why a studio would want to work to a quality beyond what is 'currently' accessible to the masses and then scale it back accordingly.

JR
 

When you edit photos for a build log, do you save the source at 1024px, or at a much higher resolution than what you put on the forum?

It's the same reason that filmmakers were recording in 4K long before 4K panels were out.

Also, mobile devices require pretty high-res images these days.
 
I like how another thread, a thread regarding the 980 Ti, has gone from that to game development, how it works, and game devs lol :p
 

Yeah, I didn't quite know how to word the pooling bit, but thanks for clearing that up :p
Of course engines use MSAA in different ways; I was just providing info on what I have seen in games like GTA V.
 
I'm a bit curious why they suddenly rushed the 980 Ti out. Is it because they want as much revenue as they can before the 390X is out, or will AMD actually be able to take the torch for best GPU this time? It just seems odd, since they already had the Titan out and could have earned some good money on that before they released this. We will just have to see.

There was a leak on the 13th or 14th. Apparently the top AMD "X" card is right there in performance with the Titan X.

The 390s are below the 980 in performance in this leak, but ahead of the 970.

If the leak proves true, it's probable that AMD had been targeting a price above the Ti, and Nvidia had some intel, hence the $350 price difference. Otherwise, if the leak is true, there will probably be a price war (let's hope).


No idea what you just said mate :huh:...

Just watch the review again; you'll catch the word he used that I was referring to. It's not a common word here in America, and it threw me.



I think stacking the HBM on the GPU itself has far more hurdles than can be jumped at this point: how to deal with the thermals being foremost, and longevity being second. Who wants to spend that much of their hard-earned cash on a video card, just to have it slowly (or quickly?) dying from thermal issues from day one? There is a reason the G1 uses a heatsink claimed to be capable of dissipating in excess of 600W. Under load it only reaches 72C, compared to around 86C on the reference Ti and the Titan. Stack RAM on top of that chip, and have fun trying to get rid of that heat!

I guess we find out today whether AMD solved their power and thermal issues. If they did, that's another hit that could add to a price war.
 

Heat ≠ Temperature

When a manufacturer gives heat dissipation in watts without telling you the temperature difference between the ambient air and the cold plate, or the flow rate of air over the cooler, they are completely talking out of their arse. It's a useless piece of information, and that's why you will never find a watercooling manufacturer giving heat dissipation figures in their specifications. Gigabyte does it a lot and it really annoys me; their coolers aren't anything special at all.

JR
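
To put rough numbers behind that: the heat an air cooler can carry away is roughly the mass flow of air times its heat capacity times how much the air warms up, so a bare wattage figure says nothing on its own. The sketch below uses illustrative values only (air at ~1.2 kg/m³, c_p ≈ 1005 J/kg·K), not Gigabyte's actual test conditions, which they don't publish.

```python
# Rough physics behind the point above (illustrative values only): the heat
# an air cooler can move is roughly  Q = m_dot * c_p * dT_air,  so a bare
# "600W" figure means nothing without the airflow and the allowed air
# temperature rise.

def dissipation_watts(airflow_cfm, air_temp_rise_c):
    kg_per_s_per_cfm = 0.000566      # ~1.2 kg/m^3 air, 1 CFM ~= 0.000472 m^3/s
    c_p = 1005.0                     # J/(kg*K) for air
    return airflow_cfm * kg_per_s_per_cfm * c_p * air_temp_rise_c

print(dissipation_watts(100, 10))    # ~570 W, but only at 100 CFM and a 10 C air rise
print(dissipation_watts(40, 10))     # the same cooler moves far less at 40 CFM
```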
 


Well, first off, the HBM isn't on the GPU itself; it's not even on the same die. It's connected via an interposer, which means that while they both sit on the same package, they aren't directly touching. Heat won't be any more of an issue than it is now; if anything it'll most likely be easier to cool. Why? It runs at a lower voltage than GDDR5.
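
A very rough illustration of the voltage point: dynamic power scales roughly with C·V²·f, so dropping from the commonly quoted 1.5V of GDDR5 to the 1.3V of first-gen HBM already cuts the switching power, before even counting the much lower per-pin clocks. The numbers below are normalized, so read the result as a ratio rather than a wattage.

```python
# Very rough illustration of why lower-voltage memory is easier to cool:
# dynamic power scales roughly with C * V^2 * f. The 1.5 V (GDDR5) and
# 1.3 V (first-gen HBM) figures are the commonly quoted ones; everything
# else is normalized, so read the result as a ratio, not a wattage.

def relative_dynamic_power(voltage, frequency=1.0, capacitance=1.0):
    return capacitance * voltage ** 2 * frequency

gddr5 = relative_dynamic_power(1.5)
hbm = relative_dynamic_power(1.3)
print(f"HBM ~{hbm / gddr5:.0%} of GDDR5's dynamic power at the same switching rate")
```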


As JR said, heat ≠ temperature. They are related to each other, but they are different concepts. :)
For a GPU core, the temperature you see tracks the watts being dumped into it: the more watts you put into the core, the higher the temperature gets, but that can be counteracted by the cooling (air or water) pulling the heat back out, so the core temperature stays under control. A great example of this was the 290X Lightning: it took a really, really hot-running core and made it much, much cooler.

Hopefully that makes sense. Heat versus temperature is a rather tricky subject that gets into physics I don't fully understand, and it also explains why Gigabyte's amazing 'Windforce that can dissipate 600 watts' isn't anything special and tends to be behind Asus, MSI, and others; the figure is misleading and purely marketing.
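
A tiny sketch of that relationship, with made-up thermal-resistance numbers chosen only to land near the 72C/86C figures mentioned earlier; real cards don't publish a simple R value like this.

```python
# Tiny sketch of that relationship, with made-up thermal resistances:
# steady-state core temperature is roughly ambient + power * R_theta, so a
# better cooler (lower R_theta) means a lower temperature for the same watts.

def core_temp_c(power_w, r_theta_c_per_w, ambient_c=25.0):
    return ambient_c + power_w * r_theta_c_per_w

print(core_temp_c(250, 0.24))   # reference-style cooler  -> ~85 C
print(core_temp_c(250, 0.19))   # beefier open-air cooler -> ~72.5 C
```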
 