Nvidia's AD102 Lovelace GPU reportedly offers gamers up to 18432 CUDA cores
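For a rough sense of where that headline figure comes from: assuming the rumoured AD102 keeps the same 128 FP32 CUDA cores per SM as Ampere's GA10x dies (an assumption on my part, not something the report confirms), 18432 cores works out to a 144-SM die versus the 84 SMs of a full GA102. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check on the rumoured AD102 core count.
# Assumption: Lovelace keeps Ampere GA10x's 128 FP32 CUDA cores per SM.
CORES_PER_SM = 128

ga102_sms = 84    # full GA102 (Ampere) die
ad102_sms = 144   # rumoured full AD102 (Lovelace) die

ga102_cores = ga102_sms * CORES_PER_SM   # 10752
ad102_cores = ad102_sms * CORES_PER_SM   # 18432

print(f"GA102 full die:          {ga102_cores} CUDA cores")
print(f"AD102 full die (rumour): {ad102_cores} CUDA cores")
print(f"Raw shader increase:     {ad102_cores / ga102_cores - 1:.0%}")  # ~71%
```

That roughly 71% jump in raw shader count, rather than any clock or IPC claim, is what the rumour is really about.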

Bah, so they didn't name it after Linda then :D

I don't think AMD have accelerated it. Ampere has been a wash since its conception. Poor yields, wrong company making the dies and so on.
 

I've heard rumours that Nvidia will be using Samsung's 5nm process, not TSMC's.

I think it's more complicated than Samsung's process node just being terrible. Ampere is not a terrible architectural leap. It's nowhere near as good as past generations, but in and of itself it's decent enough. Availability is poor, but so is everything using TSMC's 7nm, which has been available for ages now. Power consumption is pretty bad with Ampere, but there are loads of cores, and it's not exactly miles behind RDNA2. And that high power draw clearly hasn't made the cards impossible to cool.
 
Meh... means nothing if only a minority of actual customers can get them without going to extraordinary lengths, thanks to scalpers and retailers' price gouging.
 

Then they will have the same issues, i.e. Samsung's 5nm is more akin to TSMC's 7nm.

Will it be enough to kick AMD's ass? Yeah, probably. However, if they remain with Samsung, AMD will have a chance to launch on TSMC and stay in line.

Bit annoying if you think about the "what if?", i.e. what if Nvidia had stuffed all of that core tech onto a 5nm TSMC die? Better clocks, better power consumption, etc.
 

Yeah, I know what you mean. But I still think there is more at play with their decisions than consciously choosing Samsung's process over TSMC's as some sort of artificial drip-feeding technique to milk consumers over longer periods of time.

It could be that the only way for Nvidia to cram in that many cores is with Samsung's process. Maybe their design specifically 'allows' it. Or maybe it's the only way to make the cards even remotely affordable. And if Nvidia are committed to increasing the raw horsepower rather than working on clock speed and IPC improvements, maybe it makes sense for Nvidia to use Samsung. I know it sounds funny at this stage, but maybe it guarantees Nvidia a distinct advantage over the myriad of other companies using TSMC when it comes to wafer supply and design. Nvidia like to be distinct, as we know. And maybe there is a disagreement between Nvidia and TSMC that cannot be resolved at this time. Maybe there isn't enough 7nm or 5nm capacity to cater to everyone and Nvidia are forced to use Samsung.

It just seems to me like there is more at play. If Nvidia could use TSMC's 5nm and crush AMD, but they don't, then there is a valid reason for it that we don't, or may not ever, know about. Nvidia don't just want to stay competitive; they want to crush their competition.
 
Nowhere near as good a leap as Turing? :confused:

I didn't specify which past generation. ;) I think it stands to reason that Turing was one of the worst architectural leaps due to the extreme overpricing, and my comment was referring specifically to Maxwell and Pascal.

Besides, in terms of performance, Turing was actually decent. It introduced real-time ray tracing (RTRT) for the first time, Tensor cores for supersampling, and its performance-per-watt was on par with Pascal's. The issue with Turing was its pricing and its refreshes. If the 2080 Super had come out first as the 2080 for $600-650, the 2080 Ti at $800-850, the 2060 at $300-350, and the 2070 at $400-500, Turing would have been a huge success. It would have been Pascal but with RT and DLSS.
 
Ampere has many facets. It's a better "leap" than Turing. However, Turing was a stopgap: a pigeon step with RT bolted onto a very slightly shrunken Pascal. It did, however, offer DLSS and RT for the first time.

Ampere looks good against Turing. However, it should have been so much better: faster, cooler, higher clocks, lower power consumption. As pointed out by a tech guy, in that regard it's worse than Fermi was.

So I guess it's how you look at it really.

One fact no one can get away from is the poor yields. And that has nothing to do with Covid, given that Samsung's factories are running as normal.
 