  #21  
Old 14-01-19, 10:00 PM
AlienALX
OC3D Elite
 
Join Date: Mar 2015
Location: West Sussex
Posts: 12,115
Quote:
Originally Posted by AngryGoldfish
I think people do see the facts. I think they don't care enough. But it's like anything in society. People don't care if their favourite popstar is a horrible person. People don't care if their attachment to meat destroys the earth. People don't care if their kids are bullies. As long as they get what they want, that's it. Obviously that's a sweeping generalisation, but it's kinda true.
It just makes me laugh how quickly people do U-turns like nothing happened. One minute they are making every excuse under the sun, the next they are taking out their pitchforks to lynch Intel for ripping them off, which they totally allowed them to do lmao.

It's kinda like that South Park ep, "Something Wall-Mart This Way Comes", IIRC. The town is all upset and peed off because all of their local stores have closed down, yet the problem is obviously them being tight asses and shopping at the cheapest place.

It turns out the soul of the Walmart store is a mirror.....

Ahh, good old human nature.

__________________
"Those really high 20 series prices are just place holders"



  #22  
Old 14-01-19, 10:08 PM
tgrech
OC3D Crew
 
Join Date: Jun 2013
Location: UK
Posts: 604
These aren't facts though; all the conclusions you've come to are based on conjecture. Your comparisons are purely on marketing names, and marketing departments are often the least technically wise part of a tech company you can get. If you look at it on a per-mm^2 basis, Turing is the same price as Pascal: there's a fact for you. PS: at no point have I ever implied NVidia's pricing is in any way reasonable (technically justifiable and logical != reasonable; for reference, I haven't owned an NVidia card since I was given a 780).

Another one: NVidia spent more R&D money on Turing than any GPU manufacturer has ever spent on any GPU architecture, ever.
Or maybe: Turing dies are larger than their Fermi equivalents were [GF100 = 529mm^2, GF104 = 331mm^2, GF106 = 240mm^2, whereas TU102 = 754mm^2, TU104 = 545mm^2, TU106 = 445mm^2].

NVidia's pricing has been fairly consistent in terms of GPU code names (not marketing names) or die size. Maybe the marketing material or positioning would be different if AMD were competitive, but pricing won't budge as long as NVidia maintains mindshare. Attempting to lay the onus for NVidia's pricing on AMD is neither useful, nor does it fit the timescales tech companies work on or how competition actually works in a duopoly. Look at Intel.

What has Intel done since AMD became competitive? Raised prices across the board, at every marketing tier, while keeping per-mm^2 pricing exactly the same.

I'm not trying to argue for value or anything like that; my argument re: Turing has always been that it's badly placed for the consumer market, in fact borderline completely irrelevant due to its price, but the pricing does have technological precedent. As I've said before, if AMD were competitive I don't think Turing would ever have reached consumers, given that the smallest Turing die is the same size as Pascal's Titan die; I honestly think we'd just have nothing instead. I have no comment regarding a preference between the two personally, but I think having DXR hardware in consumer hands is a very useful thing for software developers.
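To make the per-mm^2 pricing point checkable, here's a quick sketch. The Turing die sizes are the ones quoted above, but the Pascal die sizes and all launch prices are illustrative assumptions of mine, not figures from this thread, so check contemporary reviews before leaning on the exact ratios:

```python
# Rough launch-price-per-mm^2 comparison, Pascal vs Turing.
# Die areas in mm^2; prices in USD (assumed, for illustration only).
die_mm2 = {
    "GP106 (GTX 1060)": 200, "GP104 (GTX 1080)": 314, "GP102 (GTX 1080 Ti)": 471,
    "TU106 (RTX 2060)": 445, "TU104 (RTX 2080)": 545, "TU102 (RTX 2080 Ti)": 754,
}
price_usd = {
    "GP106 (GTX 1060)": 299, "GP104 (GTX 1080)": 599, "GP102 (GTX 1080 Ti)": 699,
    "TU106 (RTX 2060)": 349, "TU104 (RTX 2080)": 699, "TU102 (RTX 2080 Ti)": 999,
}
for chip, area in die_mm2.items():
    print(f"{chip}: ${price_usd[chip] / area:.2f} per mm^2")
```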
__________________
Electrical engineer with a passion for rambling.

Check out my latest piece of commissioned work: http://ehenleyshutters.com/novo-in-black/
[It's a company page not run by me, I just did the electronics & software design work]
  #23  
Old 14-01-19, 10:30 PM
AlienALX
OC3D Elite
 
Join Date: Mar 2015
Location: West Sussex
Posts: 12,115
Quote:
Originally Posted by tgrech
These aren't facts though; all the conclusions you've come to are based on conjecture. Your comparisons are purely on marketing names, and marketing departments are often the least technically wise part of a tech company you can get. If you look at it on a per-mm^2 basis, Turing is the same price as Pascal: there's a fact for you. PS: at no point have I ever implied NVidia's pricing is in any way reasonable (technically justifiable and logical != reasonable; for reference, I haven't owned an NVidia card since I was given a 780).

Another one: NVidia spent more R&D money on Turing than any GPU manufacturer has ever spent on any GPU architecture, ever.
Or maybe: Turing dies are larger than their Fermi equivalents were [GF100 = 529mm^2, GF104 = 331mm^2, GF106 = 240mm^2, whereas TU102 = 754mm^2, TU104 = 545mm^2, TU106 = 445mm^2].

NVidia's pricing has been fairly consistent in terms of GPU code names (not marketing names) or die size. Maybe the marketing material or positioning would be different if AMD were competitive, but pricing won't budge as long as NVidia maintains mindshare. Attempting to lay the onus for NVidia's pricing on AMD is neither useful, nor does it fit the timescales tech companies work on or how competition actually works in a duopoly. Look at Intel.

What has Intel done since AMD became competitive? Raised prices across the board, at every marketing tier, while keeping per-mm^2 pricing exactly the same.

I'm not trying to argue for value or anything like that; my argument re: Turing has always been that it's badly placed for the consumer market, in fact borderline completely irrelevant due to its price, but the pricing does have technological precedent. As I've said before, if AMD were competitive I don't think Turing would ever have reached consumers, given that the smallest Turing die is the same size as Pascal's Titan die; I honestly think we'd just have nothing instead. I have no comment regarding a preference between the two personally, but I think having DXR hardware in consumer hands is a very useful thing for software developers.
Fermi was badly placed too, and Nvidia soon realised it when they started losing money hand over fist.

Now what AMD should have done was look at what Nvidia did: drop the core size and count and massively increase the clock speed. But they did not, even after losing several more rounds afterwards.

I can absolutely bet that if Navi comes along at 1080 (or 2060 on a good day) performance for £250, this card will drop to £250 overnight.

In fact, I would almost bet my entire PC collection that this card would cost £200-£250 if it were not for Polaris being nowhere near it.

I reiterate: Fermi cost Nvidia an absolute fortune, but when it performs on par with a £300 AMD part, you need to drop the price. It really is as simple as that. Just like when the GTX 280 launched for £330 or so, with a really nice back plate etc. I bet my house that would not have happened if it were not for the 4870 and 4890.

So yes, all of this talk really boils down to something far more simple.

Oh, and BTW, whilst I'm having a good moan:

People, please, excusing the prices of these cards because they perform the same as cards at the same price in the previous gen is as daft as the line in my sig. Get real! You are supposed to get more performance at the same price than the old gen; that is called progress, not stalling the market by selling the same for, you guessed it, the same. It's not supposed to work like that. It's supposed to be an upgrade in £ per perf, not the friggin same!!!!
__________________
"Those really high 20 series prices are just place holders"



  #24  
Old 14-01-19, 10:35 PM
AngryGoldfish
Old N Gold
 
Join Date: Jan 2015
Location: Ireland
Posts: 2,652
Quote:
Originally Posted by tgrech
These aren't facts though; all the conclusions you've come to are based on conjecture. Your comparisons are purely on marketing names, and marketing departments are often the least technically wise part of a tech company you can get. If you look at it on a per-mm^2 basis, Turing is the same price as Pascal: there's a fact for you. PS: at no point have I ever implied NVidia's pricing is in any way reasonable (technically justifiable and logical != reasonable; for reference, I haven't owned an NVidia card since I was given a 780).

Another one: NVidia spent more R&D money on Turing than any GPU manufacturer has ever spent on any GPU architecture, ever.
Or maybe: Turing dies are larger than their Fermi equivalents were [GF100 = 529mm^2, GF104 = 331mm^2, GF106 = 240mm^2, whereas TU102 = 754mm^2, TU104 = 545mm^2, TU106 = 445mm^2].

NVidia's pricing has been fairly consistent in terms of GPU code names (not marketing names) or die size. Maybe the marketing material or positioning would be different if AMD were competitive, but pricing won't budge as long as NVidia maintains mindshare. Attempting to lay the onus for NVidia's pricing on AMD is neither useful, nor does it fit the timescales tech companies work on or how competition actually works in a duopoly. Look at Intel.

What has Intel done since AMD became competitive? Raised prices across the board, at every marketing tier, while keeping per-mm^2 pricing exactly the same.

I'm not trying to argue for value or anything like that; my argument re: Turing has always been that it's badly placed for the consumer market, in fact borderline completely irrelevant due to its price, but the pricing does have technological precedent. As I've said before, if AMD were competitive I don't think Turing would ever have reached consumers, given that the smallest Turing die is the same size as Pascal's Titan die; I honestly think we'd just have nothing instead. I have no comment regarding a preference between the two personally, but I think having DXR hardware in consumer hands is a very useful thing for software developers.
Am I missing something? The Titan Xp had 12 billion transistors on a 471mm^2 die. The Turing RTX Titan has 18.6 billion on a 754mm^2 chip, and it uses TSMC's 12nm, which is just a refinement of their 16nm.
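For what it's worth, the transistor densities from those two sets of figures work out almost identical, which fits with 12nm being a refined 16nm rather than a real shrink; a quick sketch using only the numbers above:

```python
# Transistor density from the figures in the post:
# Titan Xp:  12.0 billion transistors on a 471mm^2 die (TSMC 16nm)
# RTX Titan: 18.6 billion transistors on a 754mm^2 die (TSMC 12nm)
titan_xp_density = 12.0e9 / 471 / 1e6    # millions of transistors per mm^2
rtx_titan_density = 18.6e9 / 754 / 1e6
print(f"Titan Xp:  {titan_xp_density:.1f} MTr/mm^2")   # ~25.5
print(f"RTX Titan: {rtx_titan_density:.1f} MTr/mm^2")  # ~24.7
```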
__________________
ASUS X370 Crosshair VI Hero ⁞⁞ Ryzen 1600X 4Ghz ⁞⁞ Thermalright Le Grand Macho RT ⁞⁞ Aorus GTX 1080 11Gbps ⁞⁞ G.Skill TridentZ 3200Mhz
Jonsbo W2 ⁞⁞ Corsair AX760 ⁞⁞ Pexon PC ⁞⁞ Samsung 960 EVO 250GB & 850 EVO 500GB
⁞⁞ Western Digital 1TB Blue & 3TB Green
BenQ XL2730Z ⁞⁞ Mixonix Naos 7000 ⁞⁞ Corsair K70 Cherry MX Brown ⁞⁞ Audio-GD NFB-15 ⁞⁞ EVE SC205 ⁞⁞ AKG K7XX
  #25  
Old 14-01-19, 10:35 PM
tgrech
OC3D Crew
 
Join Date: Jun 2013
Location: UK
Posts: 604
Yes, it boils down to the fact that you're getting very slightly less than a 1080 Ti in die size and a 1080 in performance at around half the price, less than 2 years later, without a change in node.

Polaris or no Polaris, I doubt you're ever getting something like a 450mm^2 die at £200 from Nvidia again; they're too big for that now. And remember, the more transistors you can pack into each mm^2, the more each mm^2 costs to create from an R&D perspective.

Edit: You're missing nothing, goldfish. Am I missing something?
__________________
Electrical engineer with a passion for rambling.

Check out my latest piece of commissioned work: http://ehenleyshutters.com/novo-in-black/
[It's a company page not run by me, I just did the electronics & software design work]
  #26  
Old 14-01-19, 10:44 PM
AlienALX
OC3D Elite
 
Join Date: Mar 2015
Location: West Sussex
Posts: 12,115
Quote:
Originally Posted by tgrech
Yes, it boils down to the fact that you're getting very slightly less than a 1080 Ti in die size and a 1080 in performance at around half the price, less than 2 years later, without a change in node.

Polaris or no Polaris, I doubt you're ever getting something like a 450mm^2 die at £200 from Nvidia again; they're too big for that now. And remember, the more transistors you can pack into each mm^2, the more each mm^2 costs to create from an R&D perspective.

Edit: You're missing nothing, goldfish. Am I missing something?
The only reason Nvidia cut die size was because, like I said elsewhere, at the time it wasn't needed. Games then wanted clock speed over cores (see also CPUs). However, Nvidia had to retaliate against Vega with a proper DX12-based tech, which is, from what I can gather, what Turing is. Back to the kitchen sink, basically.

You are probably right though; I don't think we will ever see big old alu-clad honking cores again. If AMD had done what they should have done (and what Raja was doing: making smaller, higher-clocked cards for cheap and selling them well) then maybe they could have caught up and pushed Nvidia into making mammoth-sized dies. However, they haven't, and have just made one mistake after another.

I don't know what is in store for RTG in the future, but if it's more Vega they should just cut their losses and call it a day. It would be a shame to waste all of that cash made by Ryzen on toss GPUs.
__________________
"Those really high 20 series prices are just place holders"



  #27  
Old 14-01-19, 10:50 PM
AngryGoldfish
Old N Gold
 
Join Date: Jan 2015
Location: Ireland
Posts: 2,652
Quote:
Originally Posted by tgrech
Yes, it boils down to the fact that you're getting very slightly less than a 1080 Ti in die size and a 1080 in performance at around half the price, less than 2 years later, without a change in node.

Polaris or no Polaris, I doubt you're ever getting something like a 450mm^2 die at £200 from Nvidia again; they're too big for that now. And remember, the more transistors you can pack into each mm^2, the more each mm^2 costs to create from an R&D perspective.

Edit: You're missing nothing, goldfish. Am I missing something?
Sorry, I misread your comment. You said, "I don't think Turing would have ever reached consumers given how the smallest Turing die is the same size as Pascal's Titan's die", but I read that as, "I don't think Turing would have ever reached consumers given how the Turing die is the same size as Pascal's Titan's die". Very different.
__________________
ASUS X370 Crosshair VI Hero ⁞⁞ Ryzen 1600X 4Ghz ⁞⁞ Thermalright Le Grand Macho RT ⁞⁞ Aorus GTX 1080 11Gbps ⁞⁞ G.Skill TridentZ 3200Mhz
Jonsbo W2 ⁞⁞ Corsair AX760 ⁞⁞ Pexon PC ⁞⁞ Samsung 960 EVO 250GB & 850 EVO 500GB
⁞⁞ Western Digital 1TB Blue & 3TB Green
BenQ XL2730Z ⁞⁞ Mixonix Naos 7000 ⁞⁞ Corsair K70 Cherry MX Brown ⁞⁞ Audio-GD NFB-15 ⁞⁞ EVE SC205 ⁞⁞ AKG K7XX
  #28  
Old 14-01-19, 10:54 PM
tgrech
OC3D Crew
 
Join Date: Jun 2013
Location: UK
Posts: 604
Nah, that technical explanation doesn't hold up either; CPUs and GPUs scale in very different ways. Most GPU calculations are what is known as "embarrassingly parallel", i.e. they scale to as many cores as you can feed them. That's not theoretically true of course, but when you're performing the same operation millions of times simultaneously, with each one independent of the others, it practically is. Obviously in a practical design things like coherency start to play limiting roles in excessive scaling, but this shouldn't be compared at all to CPU scaling, which generally has finite, measurable, hard theoretical maximums of performance scaling that you can predict with tools like Amdahl's Law.
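Amdahl's Law itself is simple enough to sketch. The parallel fractions below are illustrative assumptions, not measured figures, but they show why a mostly-serial CPU workload hits a hard ceiling while embarrassingly parallel GPU work scales almost linearly with core count:

```python
# Amdahl's Law: overall speedup on n cores when a fraction p of the work
# parallelises perfectly. The serial fraction (1 - p) caps the speedup at
# 1 / (1 - p) no matter how many cores you throw at the problem.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A CPU-style workload that is 95% parallel never beats 20x:
print(round(amdahl_speedup(0.95, 1_000_000), 1))  # ~20.0
# An embarrassingly parallel GPU workload (p close to 1) instead scales
# almost linearly with core count:
print(round(amdahl_speedup(0.9999, 4096)))        # ~2906, near the core count
```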

AMD only went wide at 14nm because they had to use 14nm LPP, so it was impossible to chase clocks. GCN in itself isn't designed to be a wide architecture; in fact at 64 CUs it hits a lot of inherent limits. GCN really shines at around 8-16 CUs in an APU, sipping power while delivering reasonable perf. It becomes a power hog at high CU counts.

Turing is large partly because it does more stuff. They've crammed more types of execution units (a lot more than just the Tensor and RT units) into each SM (the percentage diversity exacerbated by the use of 64-FPU SMs as opposed to Pascal's 128), and they need more SMs to have a useful amount of the smaller-percentage stuff. Turing is not a good architecture for perf/mm^2 at all; in most cases atm it's a sea of dead silicon. Turing is literally, mathematically, by design, bad value (with regard to traditional shader perf). I honestly severely doubt they make as much from it as they did with Pascal, and their finances now seem to indicate that.

But yeah, AMD has kinda been doing this whole thing for much longer: making architectures with lots of silicon that most contemporary software at launch doesn't make great use of, and by the time it does, the chips that led with the technology are possibly already outdated. But you could say it's somewhat necessary for a company like AMD to do that, because they don't have the resources to make different dies with vastly different resource allocations for different markets, like NVidia has since Kepler, so they have to push to make every mm^2 of their architecture count in some use.
__________________
Electrical engineer with a passion for rambling.

Check out my latest piece of commissioned work: http://ehenleyshutters.com/novo-in-black/
[It's a company page not run by me, I just did the electronics & software design work]
  #29  
Old 15-01-19, 04:49 AM
jimma47
OC3D Elite
 
Join Date: Jan 2013
Location: Hobart, Australia
Posts: 1,737
Quote:
Originally Posted by WYP
I know, I'm just pointing out that the GTX X60 series graphics cards have been creeping up in price for a long time. Perhaps I could have been more explicit.

TBH people are focusing too much on "what XX SKU costs that much? But XX-1 SKU was less expensive". It's a silly argument. Yes, I'd like the RTX 2060 to be cheaper, who wouldn't, but for the performance it offers it isn't by any means a bad buy.
Spot on Mark.

Just came across this review, and I was thinking my 980s are getting on a bit and only have 4GB of RAM each (yeah, so 4GB total), so this may be a good way to go for my higher-resolution requirements.

To put it in comparison: my 980s (the flagship of the time, pre-Ti) were $800 AUD each back in 2014. This 2060 is $200 cheaper but has reasonably comparable frame rates (the only benchmark you still run is Valley, and the 2060 scored a nice 10+ fps better at each res). It's much cooler and draws 50W less from the wall. Of course there are other gains around CPU/mobo/RAM efficiency as well, but for the most part it's still apples to apples.
Comparison here - https://www.overclock3d.net/reviews/...strix_review/4

So yes, I get that people are saying the mid range is creeping up in price, but don't forget it's absolutely moving upwards in efficiency and performance as well. A new xx60 is not always directly comparable to the old xx60; you have to look at the whole package. Price and performance = value.

For me, I'm better off getting a 2070 or just smashing on with my 980s until NV drop pricing on the 2080/Ti. Still getting pretty decent frames at 3440x1440, so no hurry yet.
__________________
Rig Gallery - Krusty III

  #30  
Old 15-01-19, 05:33 AM
NeverBackDown
AMD Enthusiast
 
Join Date: Dec 2012
Location: Middle-Earth
Posts: 15,413
Really, I do think people are jumping the gun here. Right now, sure, it's fine. But what if they release a 2050 that's a 1060 3GB performer for $250-300? That's not worth it at all. It would cost more than the 1060 3GB and pretty much make it a no-brainer to buy a 2060 instead, meaning better sales for Nvidia.
Really, it depends on the product stack, old gen vs new gen. Since Nvidia keep raising prices, it no longer makes sense just to compare xx60 to xx60; it's now based on price brackets. That's a smart marketing move on their part, and it lets them get away with increasing prices every gen (by moving the top tier up in price, everything below gets bumped too). People seem to accept that, but that's a different argument.
__________________
The Cost of Freedom -
"A price payed gladly, in the hopes that the free, live better."
Powered by vBulletin® Version 3.8.7
Copyright ©2000 - 2019, vBulletin Solutions, Inc.