Nvidia RTX 2060 Review

I think people do see the facts. I think they don't care enough. But it's like anything in society. People don't care if their favourite popstar is a horrible person. People don't care if their attachment to meat destroys the earth. People don't care if their kids are bullies. As long as they get what they want, that's it. Obviously that's a sweeping generalisation, but it's kinda true.

It just makes me laugh how quickly people do U-turns like nothing happened. One minute they're making up every excuse under the sun, the next they're taking out their pitchforks to lynch Intel for ripping them off, which they totally allowed them to do lmao.

It's kinda like that South Park ep, "Something Wall-Mart This Way Comes", IIRC. The town is all upset and peed off because all of their local stores have closed down, yet the problem is obviously them being tight-asses and shopping at the cheapest place.

It turns out the soul of the Walmart store is a mirror.....

Ahh, good old human nature.
 
These aren't facts though; all the conclusions you've come to are based on conjecture. Your comparisons are purely on marketing names, and marketing departments are often the least technically minded part of a tech company you can get. If you look at it on a per-mm^2 basis, Turing is the same price as Pascal; there's a fact for you. PS: At no point have I ever implied NVidia's pricing is in any way reasonable (technically justifiable and logical != reasonable; for reference, I haven't owned an NVidia card since I got given a 780).

Another one: NVidia spent more R&D money on Turing than any GPU manufacturer has ever spent on any GPU architecture, ever.
Or maybe: Turing dies are larger than their Fermi equivalents were [GF100 = 529 mm^2, GF104 = 331 mm^2, GF106 = 240 mm^2, whereas TU102 = 754 mm^2, TU104 = 545 mm^2, TU106 = 445 mm^2].
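To make that per-mm^2 point concrete, here's a minimal sketch of the arithmetic, assuming the commonly cited die sizes and Founders Edition launch prices in USD; the exact prices are my own illustrative assumptions, not figures from this thread:

```python
# Rough price-per-mm^2 comparison, Pascal vs Turing.
# Die sizes are the commonly cited figures; launch prices are assumed
# Founders Edition MSRPs in USD, used purely for illustration.
cards = {
    "GTX 1080 (GP104)":   (699, 314),  # (price_usd, die_mm2)
    "GTX 1080 Ti (GP102)": (699, 471),
    "RTX 2070 (TU106)":   (599, 445),
    "RTX 2080 (TU104)":   (799, 545),
}

for name, (price_usd, die_mm2) in cards.items():
    print(f"{name}: ${price_usd / die_mm2:.2f} per mm^2")
```

Run as-is it prints roughly $2.2/mm^2 for the 1080 against about $1.4-1.5/mm^2 for the Turing parts, which is the kind of comparison the per-mm^2 argument rests on.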

NVidia's pricing has been fairly consistent in terms of GPU code names (not marketing names) or die size. Maybe the marketing material or positioning would be different if AMD were competitive, but pricing won't budge as long as NVidia maintains mindshare. Attempting to lay the onus for NVidia's pricing on AMD is neither useful, nor does it fit the timescales tech companies work on or how competition actually works in a duopoly. Look at Intel.

What has Intel done since AMD became competitive? Raised prices across the board, at every marketing tier, while keeping per-mm^2 pricing exactly the same.

I'm not trying to argue for value or anything like that. My argument re: Turing has always been that it's badly placed for the consumer market, in fact borderline completely irrelevant due to its price, but the pricing does have technological precedent. As I've said before, if AMD were competitive I don't think Turing would ever have reached consumers, given that the smallest Turing die is the same size as Pascal's Titan die; I honestly think we'd just have nothing instead. I have no comment on which of the two I'd personally prefer, but I think having DXR hardware in consumer hands is a very useful thing for software developers.
 

Fermi was badly placed too. And Nvidia soon realised it when they started losing money hand over fist.

Now what AMD should have done was look at what Nvidia did: drop the core size and count and massively increase the clock speed, and so on. But they did not, even after losing several more rounds afterwards.

I can absolutely bet that if Navi comes along at GTX 1080 (or 2060 on a good day) performance for £250, this card will drop to £250 overnight.

In fact, I would almost bet my entire PC collection that this card would cost £200-£250 if Polaris were anywhere near it.

I reiterate, Fermi cost Nvidia an absolute fortune, but when it performs on par with a £300 AMD part, you need to drop the price. It really is as simple as that. Just like when the GTX 280 launched for £330 or so, with a really nice back plate etc. I bet my house that would not have happened if it were not for the 4870 and 4890.

So yes, all of this talk really boils down to something far simpler.

Oh and BTW whilst I'm having a good moan.

People, please: excusing the prices of these cards because they perform the same as cards at the same price in the previous gen is as daft as the line in my sig. Get real! You are supposed to get more performance at the same price as the old gen; that is called progress. Not stalling the market by selling the same for, you guessed it, the same. It's not supposed to work like that. It's supposed to be an upgrade in £ per perf, not the friggin same!
 

Am I missing something? The Titan Xp had 12 billion transistors on a 471 mm^2 die. The Turing Titan RTX has 18.6 billion on a 754 mm^2 chip. It uses TSMC's 12nm, which is just a refinement of their 16nm.
 
Yes, it boils down to the fact that you're getting slightly less than a 1080 Ti in die size and a 1080 in performance, at around half the price, less than two years later, without a change in node.

Polaris or no Polaris, I doubt you're ever getting something like a 450 mm^2 die at £200 from Nvidia again; they're too big for that now. And remember, the more transistors you can pack into each mm^2, the more each mm^2 costs to create from an R&D perspective.

Edit: You're missing nothing goldfish, am I missing something?
 

The only reason Nvidia cut die size was because, like I said elsewhere, at that time it wasn't needed. Games then wanted clock speed over cores (see also CPUs). However, Nvidia had to retaliate to Vega with proper DX12-based tech, which is, from what I can gather, what Turing is. Back to the kitchen sink, basically.

You are probably right though; I don't think we will ever see big old alu-clad honking cores again. If AMD had done what they should have done (and what Raja was doing: making smaller, higher-clocked cards for cheap and selling them well), then maybe they could have caught up and pushed Nvidia into making mammoth-sized dies. However, they haven't, and have just made one mistake after another.

I don't know what is in store for RTG in the future, but if it's more Vega they should just cut their losses and call it a day. It would be a shame to waste all of that cash made by Ryzen on toss GPUs.
 

Sorry, I misread your comment. You said, "I don't think Turing would have ever reached consumers given how the smallest Turing die is the same size as Pascal's Titan's die". But I read that as, "I don't think Turing would have ever reached consumers given how the Turing die is the same size as Pascal's Titan's die". Very different :p
 
Nah, that technical explanation doesn't hold up either. CPUs and GPUs scale in very different ways; most GPU calculations are what is known as "embarrassingly parallel", i.e. they scale to as many cores as you can feed them. That's not theoretically true of course, but when you're performing the same operation millions of times simultaneously, with each one independent of the others, it practically is. Obviously in a practical design things like coherency start to play limiting roles in excessive scaling, but this shouldn't be compared at all to CPU scaling, which generally has finite, measurable, hard theoretical maximums of performance scaling that you can predict with tools like Amdahl's Law.
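As a rough illustration of that difference (a sketch of my own, with an arbitrary 5% serial fraction assumed for the CPU-style workload), compare Amdahl's Law scaling with an idealised embarrassingly parallel workload:

```python
# Amdahl's Law: speedup is capped by the serial fraction of the work,
# whereas an idealised embarrassingly parallel workload (the same independent
# operation per pixel/thread) scales roughly linearly with core count.
# The 5% serial fraction is an assumption for illustration only.
def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (4, 64, 1024, 4096):
    cpu_like = amdahl_speedup(cores, serial_fraction=0.05)
    gpu_like = cores  # idealised: every extra core does useful, independent work
    print(f"{cores:5d} cores: Amdahl (5% serial) ~{cpu_like:6.1f}x, "
          f"embarrassingly parallel ~{gpu_like}x")
```

With 5% serial work the CPU-style speedup flattens out below 20x no matter how many cores you add, while the embarrassingly parallel case keeps scaling, which is why GPU and CPU core counts shouldn't be compared directly.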

AMD only went wide at 14nm because they had to use 14nm LPP, so it was impossible to chase clocks. GCN in itself isn't designed to be a wide architecture; in fact at 64 CUs it hits a lot of inherent limits. GCN really shines at something like 8-16 CUs in an APU, sipping power while delivering reasonable perf. It becomes a power hog at high CU counts.

Turing is large partly because it does more stuff. They've crammed more types of execution unit (a lot more than just the Tensor and RT units) into each SM (with the percentage diversity exacerbated by the use of 64-FPU SMs as opposed to Pascal's 128), and they need more SMs to get a useful amount of the smaller-percentage stuff. Turing is not a good architecture for perf/mm^2 at all; in most cases at the moment it's a sea of dead silicon. Turing is literally, mathematically, by design, bad value (with regard to traditional shader perf). I honestly severely doubt they make as much from it as they did with Pascal, and their finances now seem to indicate that.
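To put a rough number on that perf/mm^2 point, here's a small sketch comparing shader (CUDA) core density for a full GP104 against a full TU106, using the commonly cited public core counts and die sizes; treat the figures as illustrative rather than authoritative:

```python
# Shader-core density, Pascal GP104 vs Turing TU106.
# (cuda_cores, die_size_mm2) are the commonly cited public figures.
chips = {
    "GP104 (GTX 1080)": (2560, 314),
    "TU106 (RTX 2070)": (2304, 445),
}

for name, (cores, die_mm2) in chips.items():
    print(f"{name}: {cores / die_mm2:.1f} CUDA cores per mm^2")
```

That works out to roughly 8 cores/mm^2 for GP104 versus about 5 for TU106, with much of the difference taken up by the RT cores, Tensor cores and other per-SM additions.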

But yeah, AMD has kinda been doing this whole thing for much longer: making architectures with lots of silicon that most contemporary software at launch doesn't make great use of, and by the time it does, the chips that led with the technology are possibly already outdated. You could say it's somewhat necessary for a company like AMD to do that, because they don't have the resources to make different dies with vastly different resource allocations for different markets, like NVidia has since Kepler, so they have to push to make every mm^2 of their architecture count in some use.
 
I know, I'm just pointing out that the GTX X60 series graphics cards have been creeping up in price for a long time. Perhaps I could have been more explicit.

TBH people are focusing too much on "what, the XX SKU costs that much? But the XX-1 SKU was less expensive". It's a silly argument. Yes, I'd like the RTX 2060 to be cheaper, who wouldn't, but for the performance it offers it isn't by any means a bad buy.

Spot on Mark.

Just came across this review, and I was thinking my 980s are getting on a bit and only have 4GB of VRAM each (yeah, so 4 total), so this may be a good way to go for my higher resolution requirements.

To put it in comparison: my 980s (the flagship of the time, pre-Ti) were $800 AUD each back in 2014. This 2060 is $200 cheaper but has reasonably comparable frame rates (the only benchmark you still run is Valley, and the 2060 scored a nice 10+ fps better at each res). It's much cooler and draws 50W less from the wall. Of course there are other gains around CPU/mobo/RAM efficiency as well, but for the most part it's still apples and apples.
Comparison here - https://www.overclock3d.net/reviews/gpu_displays/asus_gtx980_strix_review/4

So yes, I get that people are saying the mid range is creeping up in price, but don't forget it's absolutely moving upwards in efficiency and performance as well. An xx60 is not always comparable to an xx60 from another generation. You have to look at the whole package: price and performance = value (a rough sketch of that sum is below).

For me, I'm better off getting a 2070 or just smashing on with my 980s until NV drop pricing on the 2080/Ti. Still getting pretty decent frames at 3440x1440, so no hurry yet.
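That "whole package" sum, back-of-the-envelope style, using only the rough figures from this post (single-card prices in AUD, the quoted Valley delta and the ~50W wall saving); the 70 fps and 280W baseline numbers are pure assumptions for illustration:

```python
# Back-of-the-envelope value comparison from the figures quoted above:
# GTX 980 at ~$800 AUD (2014), RTX 2060 about $200 AUD cheaper, roughly
# 10 fps faster in Unigine Valley and ~50 W less at the wall.
# The 70 fps / 280 W baseline values are assumptions, not measurements.
cards = {
    "GTX 980":  {"price_aud": 800, "valley_fps": 70, "wall_watts": 280},
    "RTX 2060": {"price_aud": 600, "valley_fps": 80, "wall_watts": 230},
}

for name, c in cards.items():
    fps_per_100aud = c["valley_fps"] / (c["price_aud"] / 100)
    fps_per_wall_watt = c["valley_fps"] / c["wall_watts"]
    print(f"{name}: {fps_per_100aud:.1f} fps per $100 AUD, "
          f"{fps_per_wall_watt:.2f} fps per wall watt")
```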
 
Really, I do think people are jumping the gun here. Right now, sure, it's fine. But what if they release a 2050 and it's a 1060 3GB performer for $250-300? That's not worth it at all. It would cost more than the 1060 3GB and pretty much make buying a 2060 a no-brainer, meaning better sales for Nvidia.
Really it depends on the product stack, old gen vs new gen. Since Nvidia keep raising prices, it no longer makes sense just to compare xx60 to xx60; it's now based on price brackets. Which is a smart marketing move on their part and lets them get away with increasing the price every gen (by moving the top tier up in price, everything below gets bumped too), and people seem to accept that, but that's a different argument.
 
Because the RTX 2060 performs on par with a GTX 1080 but is cheaper. That's not something you can say about the rest of the stack. I'm not comparing the 2060 to the 1060/960/760/etc; I'm comparing the 2060 to its closest-performing card.
I get that, but you should not be comparing it to the 2080; that is like comparing a Fiat 500 to a Maserati.
Products should be compared to other products that are either in the same target space or the previous gen; any other comparison is just ludicrous because it gives a false picture.
 

Why should they? Who decided that? It seems somewhat arbitrary to follow it religiously as if no other factors are involved. The RTX 2060 is a bigger chip than the GTX 1080 with more advanced features. Obviously every new generation has 'advanced features', but Turing is notably advanced. It's a pioneering architecture, more of a revolution than an evolution. The comparison makes a lot of sense to me.
 

Because that just plays into Nvidia's hands and justifies them and their price-creep policy.
It is a low-to-mid-range card, so it should have mid-range pricing.
 

That's not entirely true. The reason for the high price is not solely that people have accepted and justified a price increase from all sides. I don't like the pricing at all. But Turing being a more expensive architecture to manufacture, and Nvidia currently having no competition, are objective truths. That doesn't justify the price from a moral standpoint, or from the standpoint of a consumer, but it helps explain why prices have increased. Not only that, but prices fluctuate every year, and in some cases they go down. Compare the 780 Ti and 780 to the 980 Ti and 980. Competition was stronger at the time of the 780 Ti and 780, yet they were very expensive; they were still on the 28nm process and didn't have very much VRAM. The 980 Ti came out and totally undercut the Fury X while having more VRAM and better efficiency. AMD were less competitive with Fiji against Maxwell than they were with the R9 200 series against the GTX 700 series, yet the 700 series preceded the 900 series.

Ultimately, to me it's about comparing features, die sizes, specifications, performance, efficiency, etc., rather than comparing market target, which is somewhat more vague, or chip numbers, which are just arbitrary nomenclature. Everyone compared the GTX 970 to the 780 and 780 Ti: should I get a 970 or a new-old-stock 780? Which is better, the 680 or the 770? It was the same chip but a different market. The comparison was perfectly legitimate. Was it always fair? No, not necessarily, but it was one way of comparing things out of a multitude.
 
The reason for the price is all down to market competition.

Take the 1080p Unigine Valley graph as an example: the 2060 sits right in the middle of the graph. Every card below it is an AMD card. Every card above it is an Nvidia card.

The closest competitor is the RX Vega 64, and looking at prices right now the cheapest Vega 64 is the MSI reference model card at £395 from Scan.

The cheapest 2060 is a tie between the Gigabyte Mini ITX model and the MSI Ventus at £340, both from Novatech.

So, Nvidia have brought out a card that performs better than the best card AMD have to offer, with a lot more features, for a cheaper price.

These are the facts of the argument. Everything else is just biased opinion.
 
The obvious flaw in that argument being that, beyond benchmarks, it only really holds at 1080p in games with lower memory/bandwidth requirements; at 4K/high-res textures/etc. the RTX 2060 trades blows with the V56 in most real-world games, which you can find vastly cheaper, while for 1080p you could argue a £350 card is kinda overkill.
 

So an architecture costing objectively more to design and produce is a biased opinion?
 