Nvidia RTX 2060 Review

So saying an architecture costs objectively more to design and produce is a biased opinion?

Can you prove it costs more to design? Pascal cost Nvidia $2 billion in R&D alone. Considering Nvidia already had a bunch of the development done for AI research, it was really just a matter of stitching things together.
 

No. Like we've never been able to prove it. Companies don't usually release exact numbers, and rarely can a third-party examiner study and categorically prove what something cost to design. All we have are educated estimates, and that's still objective. I think of objective as something that has been studied to the best of one's ability and is said from a position of impartiality. To prove empirically what Turing cost to design and manufacture is impossible. But that's not the point.
 
The obvious flaw in that argument is that, beyond benchmarks, it only really holds at 1080p in games with lower memory/bandwidth requirements. At 4K / high-res textures / etc. the RTX 2060 trades blows with Vega 56 in most real-world games, and you can find that card vastly cheaper, while for 1080p you could argue a £350 card is kind of overkill.


It's a 1080p card, or at a push a 1440p card. Nobody in their right mind would buy a 2060 to game at 4K. So considering the card is squarely aimed at the 1080p/1440p market, it's not really a flaw. Deciding which card is best to play 4K games at 30fps is an exercise in futility.
 
I really can't see how any technology Nvidia have worked on could cost as much as, or more than, Fermi.

What is annoying me is how they are taking the pee when it comes to stuff that doesn't matter, like the cooler on the 2060 FE. It has about a hundred screws in it and is over-engineered to the point of LOLs. Someone has got to be paying for that, right? And it sure as heck isn't Nvidia.

I would also hazard a guess that Nvidia have warned board partners not to undercut the FE by much, if at all, and on top of that Nvidia are now selling the cards they make themselves for that extra bit of profit. It all just smacks of terrible greed.
 
R&D cost always scales with design complexity, and design complexity scales with transistor count. The Turing dies are Nvidia's largest consumer dies by a long way on most counts: the highest transistor counts and the widest variety of processing units.

Nvidia certainly haven't been tightening their belts: $2Bn+ of R&D over the last four quarters alone, while Pascal's spend was spread over closer to, or maybe more than, four years. Of course their silicon has a far wider variety of use cases now, and some of those are quite R&D-heavy on their own, but their GPU cores still lay the foundation for them. The fact remains they now burn twice the amount on R&D per quarter as they did up to and before Pascal.
[Graph: Nvidia's quarterly R&D expenses over the past decade]
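As a rough sanity check on that "twice per quarter" point, here's a quick back-of-the-envelope calculation using only the ballpark figures quoted in this thread (the $2Bn+ over four quarters and the claimed 2x ratio); it's an illustrative sketch, not audited financials:

```python
# Back-of-the-envelope check of the per-quarter R&D comparison above.
# Inputs are the thread's own ballpark claims, not audited financials.

recent_rnd_total = 2.0e9      # "$2Bn+ R&D over the last four quarters alone"
recent_quarters = 4

recent_per_quarter = recent_rnd_total / recent_quarters  # ~$500M per quarter

# If that really is roughly twice the pre-Pascal run rate, as claimed above,
# the implied pre-Pascal spend is simply half of it.
implied_pre_pascal_per_quarter = recent_per_quarter / 2  # ~$250M per quarter

print(f"Recent R&D run rate:     ~${recent_per_quarter / 1e6:.0f}M per quarter")
print(f"Implied pre-Pascal rate: ~${implied_pre_pascal_per_quarter / 1e6:.0f}M per quarter")
```

Even at those rough numbers the quarterly burn rate has clearly jumped; the graph above, not this arithmetic, is the actual evidence for when and how fast.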
 
I mean, that post specifically talked about the cost of R&D, so I'm not sure what your point was then
*Provides Nvidia's quarterly R&D expenses (covering "software engineering, including efforts related to the development of our CUDA platform, hardware engineering related to our GPUs, Tegra processors, and systems, very large scale integration design engineering, process engineering, architecture and algorithms") for over a decade, showing a bigger rise in R&D spending than any other public tech company, and by far the highest in their own history*
That graph literally has nothing to do with anything.
what?
This is literally the empirical data you were asking for.
 
I mean, that post specifically talked about the cost of R&D, so I'm not sure what your point was then

My point was what I said. Davva2004 said everything other than the lack of competition is a biased opinion. I disagreed by asking: if an architecture costs objectively more (based on what we know), is it really biased to say that has also impacted prices? Your point about the AI work already being done doesn't refute the fact that it still costs money. Whether the consumer absorbs that cost with Pascal, Volta, Turing or future architectures comes down to other factors; ultimately the consumer still has to pay for that development. Nvidia saw an opening in the market to release an outlier to push the boundaries and help usher in something new, and this was their chance to pass much of their expenses on to the consumer. The chips are massive, the technology is advanced, the market is dominated, Moore's Law is slowing down, taxes are high, DRAM prices are high, and GlobalFoundries are no longer invested in 7nm; to me that all points to Turing's current pricing. That's just my interpretation.
 