Alleged 4-slot GeForce RTX 5090 Founders Edition heatsink revealed

Didn't this happen with the 4090 as well? Rumours of a 4-slot 500W GPU ended up being wrong. The 4090 ended up being really efficient and had a 'normally big' cooler.
 
Motherboards are already having a meeting together, crying and fearing the weight these monsters will put on their appendages, aka GPU slots.

The GPU SAGa continues
 
Doesn't look like the same situation to me. The alleged 600W 4090 could always have been made; it's just that AMD barely competes with the 4080, so NVIDIA doesn't have any incentive to make the 4090 draw more power than it currently does. That's why people often talk about NVIDIA launching a 4090 Ti, which would be this 600W GPU with the full die, if AMD ever launches a 7950 XTX, which is also a possibility.

Thing is, apparently AMD is having a lot of trouble with their current architecture, so that will probably never happen.

But also, NVIDIA seems to be shifting strategy. Remember Jensen said he plans to go back to releasing a new architecture every year? That may only affect the AI side of things, but it's still true that NVIDIA has always developed for business first: they designed their architectures for Quadro GPUs and then reused them for GTX GPUs, and now they are developing for AI and using the same architecture for everything else.

So there is a clear possibility that they will just forget about what AMD is doing and do the best they can, regardless of whether AMD has an answer or not. Although I can't see them doing this for very long; they will probably eventually end up making the AI chips way, way more powerful than the consumer chips and literally just give us the scraps and leftovers of the AI chips.

Matter of fact is that the 4090 is massively more powerful than the 4080, and the 4080 massively more powerful than the 4070. I see that as an indication of NVIDIA not caring about what makes sense anymore; they made a total overkill GPU in the form of the 4090, not caring about power consumption.

A few years back NVIDIA would aim their top-of-the-line GPU at 250W, and that was limiting what they could achieve. Now they just don't care: they are offering the product, and if you can pay for it, the PSU and the electricity bill, you can have it. And they might double down on this idea for the next generation. In fact they probably must, since they need to offer a meaningful upgrade over the 4090. So yeah, possibly higher TDP, but it can still be more efficient despite drawing more power. It all comes down to fps/watt.
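To put that fps/watt point into rough numbers, here is a minimal sketch with made-up figures (the cards and framerates below are hypothetical, not benchmark results), just to show that a card can draw more total power and still come out ahead on efficiency:

```python
# Hypothetical figures purely to illustrate fps-per-watt; not real benchmark data.
cards = {
    "older 250W-class flagship": {"fps": 60, "watts": 250},
    "newer 450W-class flagship": {"fps": 140, "watts": 450},
}

for name, card in cards.items():
    efficiency = card["fps"] / card["watts"]  # frames delivered per watt drawn
    print(f"{name}: {card['fps']} fps at {card['watts']} W -> {efficiency:.3f} fps/W")

# With these made-up numbers, the 450W card draws 80% more power but renders
# ~133% more frames, so its fps/W is higher: bigger TDP, better efficiency.
```

Whether a real next-gen flagship lands on the right side of that ratio is for reviews to confirm, but higher power draw by itself doesn't settle it.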
 
Regarding the higher power consumption, what's the major issue with this? I don't see why Nvidia shouldn't produce cards like the 4090. I think they're awesome in the same way that a stupidly powerful sports car is awesome. I have no use for it, but someone does.
 
The problem is requiring upgrading consumers to also buy a new PSU along with the GPU, plus all the risks that NVIDIA accepted and that did blow up in their face (as in fires happening thanks to a lack of experience dealing with such high power requirements). There's also a public perception issue around efficiency: everyone and their grandmother thinks the 4090 has terrible efficiency when it's actually excellent, but people don't do the math; they see higher power draw and assume the card just isn't efficient.

These are the reasons that NVIDIA would purposefully target a 250W TDP for all their top-of-the-line cards in previous generations!

I'm not saying it's bad. I'm all for doing the absolute best possible, and there will always be people willing to pay for it. It's just that these issues do exist and NVIDIA is aware of them. But they don't care anymore. And I actually have no issue with that so long as they take responsibility for whatever issues this might cause (which they eventually did).

The problem I have with NVIDIA's current lineup is just the massive difference in performance between the 4090, 4080 and 4070, which ended up making the 4060 not really faster than the 3060... That just sucks... And again, it comes back to them avoiding lower-end users needing to replace their PSU, as they are the least likely to have money for that, although they missed their aim and went far beyond the target with this one, while trying to claim that DLSS 3 would still make it 2x faster than the 3060...

To me, the fact that they launched the 4090 as it is and then went back on their decision by massively scaling back the lower-end GPUs, to the point where they're barely any faster than the previous generation, shows a lack of faith in their own decision. It's also kind of misleading: they come in with a massively better GPU, make a big fuss about it, reviews confirm it, and then they just use a software trick to make the lower-end GPUs seem faster. If you didn't look much into it, you might have fallen for it, since you had seen how much better the 4090 was and it just makes sense to assume that repeats along the product stack.
 
Yeah, I can understand all your points. I agree with you.
 