I cannot understand the 24 GB.
This obviously will cost a lot to produce, so why the massive increase? Is it needed to get the bandwidth/speed increase? Or are there games that need this much to hold textures? Seeing how people get along nicely on 8/11 GB now, it seems like a larger jump than normal 'evolution' would demand. Even 16 GB on a top card seems generous.
Are there other uses / benefits for more RAM on the cards?
The higher you go in resolution, the higher the memory bandwidth needs to be. Hence the multiples: the amount of VRAM is tied to the width of the memory bus, so unless you derp the memory controller and lose bandwidth along with it, you're stuck with those steps.
As an example, take the 1080 Ti 11GB:

Memory bus width: 352-bit
Memory bandwidth: 484 GB/s

vs the Titan Xp:

Memory interface width: 384-bit
Memory bandwidth: 547.58 GB/s
So you can see the bandwidth you lose by derping the memory controller and dropping the memory by just 1GB: each 1GB chip sits on its own 32-bit channel, so eleven chips gives you a 352-bit bus instead of twelve chips on 384-bit.
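To put numbers on it, here's a minimal sketch of the bandwidth maths. The per-pin data rates are the published effective figures for each card; the helper name is just for illustration.

```python
# Peak GDDR bandwidth is simply bus width times per-pin data rate.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """(bus width / 8 bits per byte) * effective per-pin rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(352, 11.0))   # 1080 Ti:  352-bit @ 11.0 Gbps -> 484.0 GB/s
print(bandwidth_gb_s(384, 11.4))   # Titan Xp: 384-bit @ 11.4 Gbps -> ~547.2 GB/s
```

Knock one 32-bit channel (one 1GB chip) off the bus and the peak bandwidth drops with it, which is exactly the 484 vs 547 GB/s gap above.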
Now, memory bandwidth was kinda important on the Xp and Ti. However, nowhere near as important as it will be now. 4K is incredibly demanding on memory bandwidth, and the next-gen games (especially with RT) will be very demanding on VRAM. So if they cut the VRAM down to something like 10GB, all of that power would be hobbled by the lower memory bandwidth.
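As a very rough feel for why resolution eats bandwidth, here's a back-of-envelope sketch. The bytes per pixel, number of buffer touches per frame, and frame rate are all illustrative assumptions, not measurements from any real engine.

```python
# Crude framebuffer traffic estimate: pixels * bytes per pixel * buffer
# reads/writes per frame * frames per second. Ignores textures, geometry,
# compression and caching, so treat it as a feel for the trend only.
def frame_traffic_gb_s(width, height, bytes_per_pixel=4, buffer_touches=8, fps=60):
    per_frame_bytes = width * height * bytes_per_pixel * buffer_touches
    return per_frame_bytes * fps / 1e9

print(frame_traffic_gb_s(1920, 1080))  # ~4 GB/s at 1080p
print(frame_traffic_gb_s(3840, 2160))  # ~16 GB/s at 4K: four times the pixels, four times the traffic
```

Texture fetches and geometry sit on top of that, which is why real 4K workloads want every GB/s the bus can give.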
The bigger the textures get, the higher that bandwidth needs to be. This is why AMD were so stupid to use HBM when they did: at the time, having all that bandwidth did squat for them. However, with PCIe 4, and the fact that NVMe drives on PCIe 4 are ludicrously fast, games can finally make use of the storage speed too.
You know how for ages it wasn't worth storing a game on an SSD because the load times barely improved, apart from a small handful of titles?
Well you can totally expect that to change once these new consoles launch.
Why didn't it improve before? Because the consoles did not come with SSDs.
You also need to bear in mind that even though the PC has seen a massive uptick in users (thanks to games like Fortnite and PUBG and their popularity among younger gamers), games are still coded primarily with the consoles in mind. Any improvements for the PC (i.e. much faster storage, multiple GPUs, remember those?) have to be added in later, and developers usually do as little of that as they can get away with.
Apart from Rockstar, who genuinely do put in a lot of effort on their, ahem, "ports".
It's more about giving Nvidia the option of releasing a 3080 Super/Ti in 2021 with more than 10GB of VRAM.
24GB serves no useful purpose other than that, and maybe a bogus reason to justify the absurd price the 3090 will no doubt be given.
That is not true. See the above, and what happens when you derp the memory by just 1GB.
There are many falsehoods doing the rounds at the moment, like "Nvidia are ripping us off innit" and "their margins are higher than evarrr!", both of which are BS.
Apparently Nvidia work to a 60% margin. Always have, always will. The reason GPUs have gotten expensive? Because we keep demanding more and more performance, so they deliver it.
Also, as I did elsewhere, I will try to explain why the 2080 Ti cost so much. Again, these are facts; ignore them if you like, but it's very narrow-minded to do so!
1. Nvidia were not going to release Turing. It was supposed to be Ampere. Samsung's node failed, Ampere was delayed, and Nvidia had to wait for Samsung to rework the node to make it even usable (what started as plain 8nm became the custom 8N node they're using now). So they needed a whole node fixed just to get it to work.
2. The 2080 Ti die was absolutely frigging ENORMOUS: TU102 is 754mm2. Compare that to the 1080 Ti's GP102 at roughly 471mm2, and Turing's big die is around 60% larger than Pascal's.
3. Nvidia did not want to use TSMC, as they were getting involved with Samsung. Why? Because TSMC are really expensive. Turing was basically a slightly shrunken Pascal on TSMC with the tensor and RT cores bolted on, hence the massive die. Massive monolithic dies cost a fortune and failure rates soar, because one bad area can mean a dead core.
4. Nvidia had a deal with TSMC to provide only working dies, i.e. TSMC would swallow some of the cost of the dead ones. That basically means TSMC would have priced at least some of that risk into the 2080 Ti dies; there's no way they would have swallowed it all.
5. TSMC *are* expensive. AMD are OK because they went the Ryzen chiplet route, meaning lots of working dies per wafer. However, as already explained, the 2080 Ti die was bloody huge, meaning huge cost (rough numbers sketched below).
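As a crude sketch of why die size drives cost, here is a back-of-envelope yield model: a 300mm wafer, a simple Poisson defect model, and an assumed defect density of 0.1 defects per cm2. None of these are Nvidia's or TSMC's real numbers, and it ignores edge losses and the fact that partly broken dies get salvaged as cut-down SKUs, but the scaling is the point.

```python
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.1  # assumed defect density, purely illustrative

def dies_per_wafer(die_area_mm2):
    """Crude candidate count: wafer area / die area, ignoring edge losses and scribe lines."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

def poisson_yield(die_area_mm2, d0_per_cm2=DEFECTS_PER_CM2):
    """Fraction of dies with zero defects: exp(-area_in_cm2 * defect_density)."""
    return math.exp(-(die_area_mm2 / 100) * d0_per_cm2)

for name, area_mm2 in [("GP102 (1080 Ti)", 471), ("TU102 (2080 Ti)", 754)]:
    candidates = dies_per_wafer(area_mm2)
    good = int(candidates * poisson_yield(area_mm2))
    print(f"{name}: ~{candidates} candidates per wafer, ~{good} fully working")
```

With those assumptions you get roughly 90-odd fully working GP102s per wafer versus 40-odd TU102s, so every good Turing die has to carry a much bigger slice of the wafer cost.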
And that is quite probably why Turing should get spanked by Ampere: Ampere was the ground-up design, not Turing. As I explained, Turing was a slightly shrunken Pascal on a massive die with tensor cores bolted on.
Hence the supposed enormous uplift in RT performance.
To add.
These Samsung dies are not as good as TSMC's, BTW. The enormous 2080 Ti used 250W TGP; the 3090 uses 350W TGP.
The 3090 is a failed Quadro, but not in the usual sense. It is not a complete failure; it just uses way too much power. So it can't be used as a Quadro, as those cards need to go into rack servers and the like, and need to behave perfectly when it comes to thermals and power. You cannot shove a 500W+ (overclocked) card into a server.
With us, the home users? They'll leave that to us: water blocks, loads of airflow, big-ass coolers, etc.