RTX 2060 Pricing and Performance Leaked

So basically the more RTX DICE cut out of BFV the better it runs. Who'da thunk it eh?

I would worry that in the end the difference will be indiscernible.
 
If the leaked price and performance are accurate, that's certainly good value. It also makes me look forward to the 1160.
 
I feel having RT on this card is an empty promise, and thus a very bad idea.

We'll see I guess dude. If they crack DLSS* and RT a bit then it may just scrape through.

* I would just turn anti-aliasing off myself, but whatevs.

The big test? Metro Exodus. They called it a "GPU cooker" so I will be interested to see if they cut it back or left it as it was then.

Price for this? Too much. 1070ish performance for the same price the 1070 launched at. Or rather, should have launched at. Either way it's still pretty much the same price for last-series performance.

I really hope Navi is OK.
 
1070ish performance for the same price the 1070 launched at.


Between 1070 Ti and 1080 doesn't sound like '1070ish performance' to me. A 1070 Ti is still considerably more expensive than $350 and will be slower. So not only will you be getting (assuming this leak is true) better performance than the 1070 Ti for a lower price, you're also getting new tech.


If the 1160 is a 2060 without RT and at a lower price, that would be even better value (assuming you don't want RT).
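To put rough numbers on that value argument, here's a quick back-of-envelope sketch; the street price and relative-performance figures below are illustrative placeholders I've assumed, not figures from the leak:

# Illustrative perf-per-dollar comparison -- all numbers are placeholder assumptions
cards = {
    "GTX 1070 Ti": {"price_usd": 450, "rel_perf": 1.00},  # assumed current street price
    "RTX 2060":    {"price_usd": 350, "rel_perf": 1.05},  # leak: between a 1070 Ti and a 1080
}

for name, c in cards.items():
    value = c["rel_perf"] / c["price_usd"] * 100
    print(f"{name}: {value:.2f} performance per $100")

Even with fairly conservative placeholder numbers the cheaper card comes out well ahead on performance per dollar, before you even count the RT and Tensor hardware.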
 
I will believe the performance when we actually get it. Right now? It would be bucking a trend, and it sounds a bit too good to be true.

Ed, I also don't see any SLI or NVLink connectors. Shame.

Just had a look, and a think. If this card does perform as rumoured (between a 1070 Ti and a 1080) then that makes it a 1440p card. This means that a 4GB version (or less) will seriously struggle, as my Fury X did.

If it is a 6GB version for the rumoured price of £350 then it could actually be a great card to buy. I still, however, think it is too expensive, especially when looking back at the launched and then revised price of the 1060 6GB. What I mean is, when AMD released the 570 and 580, before the mining boom, it was around £250 or so, which I still thought was too much, given that at launch the 970 (which it replaced in performance terms) was £240 or so.

That is why I find it hard to believe it will be any faster or cheaper than the 1070.

Anyway, waffle over. I've put forward my thoughts, so we'll see how it actually pans out :)
 
Going from the TFLOPs & clock speeds(Within 5% of the 2070) of this FE against the RTX 2070 FE, it seems like this is coming in at ~1950 cores(Around 15% fewer), so it'd make sense that RT performance(Which shouldn't take much of a hit from the much larger %age cut to memory+bandwidth) isn't too far from the RTX 2070. This makes a lot of sense given the difference between the other two Turing cards cut from the same die, the RTX 2080 Ti & Titan(Yields can't be that bad with such a mature node, especially as we get down to more sensible die sizes), as well as with keeping the Tensor & RT performance up while still differentiating in texture/shader performance with the bandwidth & memory reduction(Which nowadays is probably worth a lot more in terms of total board BOM cost).

This would also bring it closer to the GTX 1070's CUDA count, a trend we've seen with previous Turing cards(CUDA counts closer to the 10X0 tier above).
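For anyone who wants to sanity-check that estimate, the back-of-envelope maths looks roughly like this; the 2070 FE figures are its published specs, while the 2060 TFLOPs and boost clock below are placeholders standing in for the leaked numbers:

# cores ~= TFLOPs * 1000 / (2 ops per clock * boost clock in GHz)
def estimate_cores(tflops, boost_ghz):
    return tflops * 1000 / (2 * boost_ghz)

rtx2070_cores = 2304          # RTX 2070 FE CUDA cores
rtx2070_boost = 1.71          # RTX 2070 FE boost clock in GHz (~7.9 TFLOPs FP32)

leaked_tflops = 6.5           # placeholder for the leaked 2060 FP32 TFLOPs
leaked_boost = rtx2070_boost * 0.97   # "within 5% of the 2070"

est = estimate_cores(leaked_tflops, leaked_boost)
print(f"~{est:.0f} cores, about {1 - est / rtx2070_cores:.0%} fewer than the 2070's {rtx2070_cores}")

Plug in the actual leaked TFLOPs/clock figures and you land in the high 1900s, i.e. roughly 15% fewer cores than the 2070.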
 
As I said, we'll see I suppose. Right now? It doesn't seem to carry any caveats at all, which is unheard of for a GPU in modern times. Whilst I would like this positive BS era we live in *if* things ever lived up to the hype, I must say I am rather jaded with the entire "thing" from top to bottom: the industry, the con merchants (cough, RAM and SSDs, cough cough) and the bent "reviews" on YT, which are nothing but advertisements smashed out by one of their drones.

The sad part is that the 2060 could have been a legend. You know? Kinda like the GTS 450, 550, GTX 460 etc. You could have combined two and given the £1500 card a run for its money. Sadly Nvidia stopped all of that fun. If they really cared they would push devs to get the support in, and leave the connectors on the smaller cards. I used to love using two low/mid-range cards to get bleeding-edge, top-end performance.

It's sad how politics within the industry and cash have completely hobbled it.

I am really hoping that Navi and, more importantly, "Scaleability" actually mean something, other than being a word that doesn't seem to exist, by Google's spelling standards at least :D

I can just see a trio of mid sized cards all water blocked up and raring to go :D
 
A big part of the reason why AFR(The only multi-GPU technology you can easily and universally implement without active support from devs) is kinda dead(Particularly for low end cards) is because people started to realise that frame latency was just as important to smoothness as frame rate. Two cards using AFR to get up to ~1.9x frame rate boosts would generally have the same or worse frame latency as if only one of the cards was turned on. It's not really an issue if you can already get 60+fps on one card and you're using it to get to ~120+fps, but if you're using it to take a game running at 20fps on one card up to playable framerates then you're looking at ~50ms frame latencies(Before you even get into spikes, bad timing & microstutter), which is more of a delay than many modern internet connections bring in(And almost impossible to compensate for).
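To put some numbers on that (illustrative only, assuming the commonly quoted ~1.9x best-case AFR scaling):

# Rough sketch of why AFR helps frame rate but not per-frame latency
def afr_numbers(single_gpu_fps, scaling=1.9):
    frame_time_ms = 1000 / single_gpu_fps     # each frame is still rendered start-to-finish on one GPU
    displayed_fps = single_gpu_fps * scaling  # the two GPUs' frames are interleaved on screen
    return displayed_fps, frame_time_ms

for fps in (20, 60):
    shown, latency = afr_numbers(fps)
    print(f"{fps} fps per GPU -> ~{shown:.0f} fps shown, but still ~{latency:.0f} ms per-frame render latency")

So taking 20fps up to ~38fps with a second card still leaves you with ~50ms of render latency per frame, which is the point above.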
 
Not putting my money on AMD in the foreseeable future; I'm hoping Intel can poke Nvidia in both eyes and make them mend their ways ;)

From what I have seen, expect Polaris-level performance from Intel. Maybe faster, as it may be shrunk to a newer node.

Yeah, with multi-GPU they just need to do it differently, that's all. Like, two CPU cores are better than one, even if there is more latency.

AMD are definitely onto something with Infinity Fabric. Yes, it does add latency, but it just hammers through that with fast RAM to get to the end. I am hopeful they can do something similar with GPUs. That would be incredible, and would not need any special drivers and whatnot.
 
Yeah multi-chip GPUs are definitely going to be a thing very soon, but I'm not sure that'll extend to multi-card configs for a while(And probably not even multi-module/interposer for GPUs); more likely multiple chips on one interposer, with the amount of bandwidth required between them to be able to work relatively invisibly as a single unit. If we could maintain the same bandwidth over a larger distance it'd definitely be useful(Getting two very hot things further away from each other with individual cooling obviously has many benefits), so maybe multi-card will have a revival once we have optical links or very wide high bandwidth links or something like that(Infinity Fabric certainly can scale beyond the socket/interposer as we know from Epyc at least, but that has much lower bandwidth requirements & still requires very expensive purpose-built fixed links within the motherboard).
 
Yeah from the sounds of it Intel's GPU is using similar concepts to Raja's AMD GPUs; it's looking like a primarily compute-focussed card with features to match(Large amounts of ECC HBM, AI/ML instructions, a single-slot cooler(From the silhouette they unveiled), likely sub-150W, possibly closer to 75W for the first models), so Polaris-tier gaming performance probably isn't crazy for their first GPU. Presumably it's going to launch in the high-memory configurations with all the database/server features first to selected partners/customers, with some prosumer & then consumer versions(But still probably sold mostly for their compute/CAD kind of stuff first) following.

But if AMD is the first to market in the GPU space with MCMs(And they've shown with Epyc they can pull it off) then that would make large GPUs far cheaper to manufacture, especially if those large GPUs were already on an interposer to connect to memory anyway.
 
Going by everything they have tried in the past, it is bound to be a huge success (just like every other one they tried to make, i.e. a bust).

Raja is very frugal as a person, and really understands how to make a decent product at a low price that people can afford. You will always win if you work like that.
 
None of Intel's GPU projects have really been a bust; the only other dedicated GPU project was Larrabee, which was essentially renamed & refined in its targeting to become the hugely successful Xeon Phi range of coprocessors, which dominated the supercomputing world for many years. Besides that their only GPUs have been tiny iGPUs, never realistically intended to or capable of competing against dGPUs and so never given much focus for heavier use cases like gaming; they did what they had to though.

I don't expect them to jump from making ~10W iGPU parts straight to several-hundred-watt powerhouses(Creating high power architectures that are still reasonably efficient is a big challenge, especially on low-power-focussed nodes); the architecture will have to scale up gradually, which might not lend itself massively favourably to a high-end gaming focus for a couple of generations.
 
They are mid-range, dude. Definitely not high end.

They will also be at massively over-inflated prices, just for the Intel gang.

Did you really just ruin one of my last hopes in 2018? :D You've got clout though, so you're more than likely right. Which leaves Nvidia out of control (unless we stop buying their trinkets), including the 2060 price here.

I'd only buy into 1st-gen RTX as a last resort; you know, if my GPU broke and there were nothing else to replace it with. I am seriously noticing the lack of performance in modern games with my trusty card though; in Odyssey, for example, I have to turn down most settings quite a bit - and it'll only get worse. Still, whereas normally that'd be an incentive for me to buy into the new, today it's not whatsoever :) and likely we'll see better refreshes on 7nm in 2019 anyway. Zero lifespan for RTX 2000.
 
Oh I could be wrong. No one has told me anything about it, but I am just going on the bloke who they stole, I mean hired, to do the job.

I just can't see Intel coming up with Nvidia-beating GPUs in less than two years. I would expect them to be like Polaris, given who is behind it :)
 