Alleged 3DMark Results Leak for Nvidia's RTX 4090 - Impressive Results

I know 3DMark tends to favour Nvidia, but I find it hard to believe that AMD will be able to compete with something 90% faster than a 3090.
 
At this point, I think Nvidia could at any time release a card that is 4x faster than 3090Ti. They just dust off a design from the shelf that is good enough to beat AMD and go with it. With the 30xx series AMD came close. And it is known that Nvidia behaves like an egotistical maniac so for the 40xx cards they might beat AMD to a pulp just for fun.
 
At this point, I think Nvidia could at any time release a card that is 4x faster than 3090Ti. They just dust off a design from the shelf that is good enough to beat AMD and go with it. With the 30xx series AMD came close. And it is known that Nvidia behaves like an egotistical maniac so for the 40xx cards they might beat AMD to a pulp just for fun.

They have such dominance now that they can literally do what they want with the market.
 
At this point, I think Nvidia could at any time release a card that is 4x faster than 3090Ti. They just dust off a design from the shelf that is good enough to beat AMD and go with it. With the 30xx series AMD came close. And it is known that Nvidia behaves like an egotistical maniac so for the 40xx cards they might beat AMD to a pulp just for fun.

I think you're giving Nvidia a little too much credit. "Nvidia could at any time release a card that is 4x faster than 3090Ti." I have never seen any evidence that this is even remotely true, either in the past or in recent years. Even if Navi 31 were faster than a 4090Ti, Nvidia couldn't fast-track a 3nm follow-up architecture "at any time". They're tied to the same constraints as any other silicon manufacturer. They're not magicians.
 
Add to that that Videocardz and other websites repost fake leaks and don't even bother to correct them (see Moore's Law Is Dead).


I don't trust these leaked benchmarks much.
It's an easy way for websites to generate a few clicks.

Early on, some reported RDNA 3 to be 3x faster than RDNA 2.

https://www.pcgamesn.com/amd/rdna-3-gpu-rumour

Currently many say it's 2x faster, with 2.4x faster RT.


The hype train must be kept moving.... ^_^
 
I know 3DMark tends to favour Nvidia, but I find it hard to believe that AMD will be able to compete with something 90% faster than a 3090.

I think AMD got awful lucky last time around. The last time they went up against a stinker was the 5000 series. And whilst it wasn't as fast as Fermi, it was much better behaved and much cheaper.

The last round, Ampere vs Big Navi, was pretty much a repeat of that. The Fermi cards were a fair bit faster in DX11, but like RT it wasn't mandatory or, at the time, overly important. So the whole thing was very similar.

Ampere was actually specced completely differently. Like, wildly differently. It had far fewer CUDA cores and so on. In fact, everyone including MLID got it completely wrong until the day before launch. So I guess his "sources" were wrong.

Whilst people like him can be informative, for the most part they just go on rumour. The part that irritates me about people like him is that they always manage to manipulate things to make it look like they were right, even when they are completely wrong. Which takes away from their credibility a little.

I was expecting Ampere to actually blow my mind. It literally should have been totally mind blowing. Pascal was their last biiiig release and Turing was just a tiny step on from that. I was really expecting Ampere to completely blow me away. In fact, going back to those rumours and sources? Apparently Samsung's node was so bad they were going back to TSMC. So what those leaks could have been is what Ampere was supposed to be and could have been. Totally different, on TSMC.

This next gen? It will blow my mind. Even more so for the people who thought Ampere was good. It will be the true next generation away from Pascal and Turing, and should kick major backside.

However, it will not be as cheap as Ampere was supposed to be. No way, no how. That said, the lower-end cards should be absolutely brutal too, so there is that happy thought to hold onto. Hopefully this round the cards lower down the stack, like the 60 and 60Ti, will be as awesome as that class of card was in the past. The last decent 60-series card we got was the 1060, and that was bloody ages ago now.
 
I think AMD got awful lucky last time around. The last time they went up against a stinker was the 5000 series. And whilst it wasn't as fast as Fermi, it was much better behaved and much cheaper.

The last round, Ampere vs Big Navi, was pretty much a repeat of that. The Fermi cards were a fair bit faster in DX11, but like RT it wasn't mandatory or, at the time, overly important. So the whole thing was very similar.

Ampere was actually specced completely differently. Like, wildly differently. It had far fewer CUDA cores and so on. In fact, everyone including MLID got it completely wrong until the day before launch. So I guess his "sources" were wrong.

Whilst people like him can be informative, for the most part they just go on rumour. The part that irritates me about people like him is that they always manage to manipulate things to make it look like they were right, even when they are completely wrong. Which takes away from their credibility a little.

I was expecting Ampere to actually blow my mind. It literally should have been totally mind blowing. Pascal was their last biiiig release and Turing was just a tiny step on from that. I was really expecting Ampere to completely blow me away. In fact, going back to those rumours and sources? Apparently Samsung's node was so bad they were going back to TSMC. So what those leaks could have been is what Ampere was supposed to be and could have been. Totally different, on TSMC.

This next gen? It will blow my mind. Even more so for the people who thought Ampere was good. It will be the true next generation away from Pascal and Turing, and should kick major backside.

However, it will not be as cheap as Ampere was supposed to be. No way, no how. That said, the lower-end cards should be absolutely brutal too, so there is that happy thought to hold onto. Hopefully this round the cards lower down the stack, like the 60 and 60Ti, will be as awesome as that class of card was in the past. The last decent 60-series card we got was the 1060, and that was bloody ages ago now.

It's the arrogance of MLID that irks me. I know it's the leaker's game (well, he's more of a reporter than a leaker) to act confidently and deny any failings, but it bugs me. I don't see why a little humility and modesty can't be a part of the process. Even RedGamingTech is not overly cocky. It'd make all his predictions and reports much more watchable, because I enjoy his content otherwise. I don't enjoy watching someone congratulate themselves when 'his sources' are right, and then twist things 'round when 'someone else's sources' that he reported on are wrong.

I was happy enough with Ampere; I thought it was a solid architecture. Nothing like Maxwell or Pascal, but it was a huge improvement over Turing. I'm pumped for Lovelace! I can't wait to see what they can do.

Same goes for AMD. I think AMD will be able to compete with most of what Nvidia can offer, but not a 4090 if these performance numbers are accurate. I think it'll be as the 6900XT was to the 3090. Some games it was faster, but by and large it was slower. And Nvidia then sealed that with the 3090Ti and the 3080Ti, making the 6900XT a hard sell due to its lack of DLSS, poor RT performance, and no CUDA acceleration or other Nvidia-specific features. The 6800, 6700, and 6600, I like pretty much all of those cards and their non-XT variants. If you can find the 10GB 6700 non-XT model in stock anywhere (very few were made), it's only €40 more than a 6650XT while being 10-15% faster and being more efficient. It's amazing!
 
It's the arrogance of MLID that irks me. I know it's the leaker's game (well, he's more of a reporter than a leaker) to act confidently and deny any failings, but it bugs me. I don't see why a little humility and modesty can't be a part of the process. Even RedGamingTech is not overly cocky. It'd make all his predictions and reports much more watchable, because I enjoy his content otherwise. I don't enjoy watching someone congratulate themselves when 'his sources' are right, and then twist things 'round when 'someone else's sources' that he reported on are wrong.

Well, we've joked around before on here that everything these days is a supposed leak. Only, real leaks do not happen. Not 100% reliable, confirmable ones. For a simple reason - NDA. You break an NDA and your life will literally be ruined. And I don't joke. You would get sued so hard you would be paying it off for the rest of your life. So all of those leaks are done by people with approval. End of. No one on this planet would be dumb enough to break an NDA with Nvidia or Intel, no one.

So basically, as Barry The Baptist once said in Lock, Stock and Two Smoking Barrels?

"You know what we want you to know".

And Nvidia deliberately fed false information on Ampere out to people like him, who then looked utterly stupid the next day when the real Ampere rocked up with completely different specs.

But yeah, just like many others, it has totally gone to his head. He now thinks he is some sort of leading authority on passing on information that the vendors want out there.

But as we know it is all staged. Not that what they sometimes get fed isn't correct, but that no one would ever be stupid enough to leak anything at all that they have signed an NDA against. It's literally signing your own suicide note.

I was happy enough with Ampere; I thought it was a solid architecture. Nothing like Maxwell or Pascal, but it was a huge improvement over Turing. I'm pumped for Lovelace! I can't wait to see what they can do.

Same goes for AMD. I think AMD will be able to compete with most of what Nvidia can offer, but not a 4090 if these performance numbers are accurate. I think it'll be as the 6900XT was to the 3090. Some games it was faster, but by and large it was slower. And Nvidia then sealed that with the 3090Ti and the 3080Ti, making the 6900XT a hard sell due to its lack of DLSS, poor RT performance, and no CUDA acceleration or other Nvidia-specific features. The 6800, 6700, and 6600, I like pretty much all of those cards and their non-XT variants. If you can find the 10GB 6700 non-XT model in stock anywhere (very few were made), it's only €40 more than a 6650XT while being 10-15% faster and being more efficient. It's amazing!

I would have been happy enough with Ampere as a stopgap had the RRP not been total bull crap. Like, if it had given gamers some relief after Turing, which was expensive. Because now? We are headed back in that direction.

That said, AMD are apparently closing in on 3GHz now. So if it really is their Maxwell moment they could still do very well indeed. Especially as, let's face it, RT is still all but pointless and I don't really see anything game-wise on the horizon that will change that any time soon.
 
Until I see solid benchmark data from a decent reviewer it's all smoke and mirrors. Sure, it'll be better; it would be stupid if it wasn't. But going back to when they launched their last set of cards, I wouldn't even listen to what Nvidia puts out at the reveal.

I expect it'll be a closer-run thing than many would expect, myself. Regardless of people's opinions on anything, AMD do one thing right: ROADMAPS. So I expect the plan on their part to be pretty solid.

A Maxwell moment, I'm unsure about, and while Nvidia have the edge on software, it's really becoming less of an edge, but sure, they have some tricks.

Where I feel Nvidia are going wrong is the laughing stock that is the metaverse rubbish, and they are investing heavily in it. Thanks, but no thanks.

For me it's wait and see, and tbh I'm not upgrading for years yet, not unless Half-Life 3 comes out and needs some massively powerful card to run it. Outside of that, I expect profits to tumble in all areas of tech myself.
 
A lot of it can be quite easily worked out theoretically with mathematics. So long as you have the supposed specs, of course. So there is a science to it, because it is science.

Red Gaming Tech and AdoredTV are particularly good at it (or were, in the case of AdoredTV) and intelligent enough to articulate the information.

Both of them said Ampere was going to be bad. In fact, AdoredTV quit over it because of the hatred people were sending his way for being honest about it. I was sad about that, because Jim was very humble and not a total know-it-all like MLID.

Seriously, with regard to, let's say, Navi III? It can be easily worked out if you have something literally as basic as a die size. Once you know that you can work out a transistor density and so on. Once you know how much they can fit on there and a loose clock speed? You can calculate quite accurately how it will perform.

THE biggest step for Nvidia this time will be what screwed Ampere - CLOCK SPEED. Ampere had WORSE clock speeds than Turing, FFS. Far worse! Good 2080Tis consistently hit 2100MHz easily, and Ampere was a shrink, meaning clocks should have risen considerably. But they didn't, and that is quite possibly the number one reason why it was so poor. As soon as we got a rough idea of the clocks you could just tell it was going to be bad.

Now of course, people go on about Fermi not being so bad once Nvidia had used better transistors on it, etc etc. Go and look at the clock difference between a 580 and a 680. That is what you should be expecting. They clocked balls!

And that, I believe, is going to be the chasm between Ampere and whatever they are calling this latest TSMC tech. It will guzzle power, no doubt, because there will be an awful lot going on, but once again we come back to science. If you go on the clock speeds being rumoured? OK. So you have a smaller node, on a MUCH better fab, so how can it use that much power? Easy! The clock speeds. That is why: because they are obviously now able to push that herculean amount of CUDA cores way up in clock speed as well.

But. At the same time? Theoretically? So can AMD. And they have found a way to drastically reduce their die sizes by tidying Navi up and removing a lot of the fat that was needed to push Infinity Cache. Meaning smaller, cheaper dies that should clock utter balls. Hence - Maxwell.
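
For what it's worth, the clocks-versus-power point above can be put into very rough napkin maths. This is a minimal sketch only: dynamic power scales roughly with capacitance x voltage^2 x frequency, and every figure below is an invented assumption, not a measured number for any real card.

```python
# Minimal sketch of the clocks-vs-power argument above.
# All operating points are ASSUMPTIONS for illustration, not real figures for any card.

def dynamic_power(freq_ghz: float, voltage: float, capacitance: float = 1.0) -> float:
    """Relative dynamic power, proportional to C * V^2 * f (arbitrary units)."""
    return capacitance * voltage ** 2 * freq_ghz

modest_clock, modest_volt = 1.9, 0.95   # assumed "sensible" operating point
pushed_clock, pushed_volt = 2.8, 1.10   # assumed clocks-pushed operating point

base = dynamic_power(modest_clock, modest_volt)
hot = dynamic_power(pushed_clock, pushed_volt)

print(f"Pushing clocks {pushed_clock / modest_clock - 1:.0%} higher costs "
      f"~{hot / base:.2f}x the dynamic power, before counting any extra cores.")
```

The exact numbers don't matter; the point is that chasing clocks (and the voltage needed to hold them) inflates power far faster than the clock gain itself, which is how a smaller node on a better fab can still end up power-hungry.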
 
Well, the problem is complex, I'll give you that much, but even TSMC don't know the full scope of the density, as it's almost impossible to calculate. At best it's guesswork on the density, as it's such a tiny scale, and even with a decent enough microscope to look at 1mm² it'll take you a fair old time to count it lol

They will be better. TSMC are way ahead, so any lower nm on their part means improvement, but it's just a number really, not a 100% fact. But yes, better.

I think Nvidia can keep up, but unless they have some good fabric in the works, for how long can they keep up is my feeling atm, hence the old Intel glue statements. And let's face it, at this stage Intel are laptop and server; I do feel, and have felt, that they have been dead in the water for ages. Two years ago, if they had been ready, then sure, it would have been a good start, but at this stage they've got no chance.

The other specs give you more of an idea, but it depends on the workload with Nvidia, as sometimes the cores are all in use and other times only half in use. I expect the 80% mark is possible, but it's a different story for AMD with the multi-chip advantage. It doesn't really matter though; either way these cards are going to be very expensive, but if AMD have that GPU Ryzen moment then Nvidia had better look out, as like I said, AMD's roadmaps lately have been very on point.

Still interested to see how the cards perform, just not buying. Maybe in two more gens, so four years, seems more likely for me, plus I have other things to focus on atm.

If Nvidia don't hit 3GHz I'll be surprised, because AMD will easily, as even my 6800XT hits 2700MHz without issue, like most of them.

Competition is a good thing, and I bet Nvidia are gutted that AMD are even anywhere close, and they are close. For me RDNA2 has been solid; it can play everything fine. Sure, DLSS helped Nvidia, but that advantage is fading, so if AMD keep progressing and gradually improving, Nvidia are going to need a new magic trick for the halo products, and that's all they care about. Can they sell you a 4090Ti for £2500? If they can, they are happy bunnies.
 
Well, the problem is complex, I'll give you that much, but even TSMC don't know the full scope of the density, as it's almost impossible to calculate. At best it's guesswork on the density, as it's such a tiny scale, and even with a decent enough microscope to look at 1mm² it'll take you a fair old time to count it lol

I don't think you understood.

When you look at Nvidia's technology and their roadmap you can see how many transistors they have packed in. Thus, you can calculate how many can be packed in on a smaller node. It is a little more difficult here because Samsung's node is different. As in, it is not as good as TSMC's. So it is more like a larger node on TSMC.

Once you calculate how many transistors they can pack in, you can calculate how many CUs or CUDA cores they can pack in. Once you do that and have a rough clock speed in mind, you can then calculate roughly how that will perform in real-world terms.

This is why the rumours of how much faster this will be started a long, long time ago. It is also why Ampere was calculated to be far better than it was, barring one thing - no one knew exactly how Samsung's node would perform. Turns out it was crap, though some even predicted that.
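
To put rough numbers on that density argument, here is a minimal sketch. The GA102 figures are approximate public numbers; the density multiplier for the newer TSMC node is purely an assumption for illustration, not anything TSMC or Nvidia have stated.

```python
# Rough sketch of scaling a known chip's transistor budget to a denser node.
# GA102 (Ampere) is roughly 28.3B transistors on a ~628 mm^2 die (approximate public figures).
# The density gain for the newer node is an ASSUMPTION, not a TSMC number.

ga102_transistors_b = 28.3
ga102_area_mm2 = 628
ga102_density = ga102_transistors_b * 1000 / ga102_area_mm2   # ~45 MTr/mm^2

assumed_density_gain = 2.7                                     # illustrative assumption
new_density = ga102_density * assumed_density_gain

same_size_budget_b = new_density * ga102_area_mm2 / 1000
print(f"A same-size die on the denser node: ~{same_size_budget_b:.0f}B transistors, "
      f"i.e. {assumed_density_gain:.1f}x the budget to spend on CUDA cores, cache, etc.")
```

Swap in whatever density assumption you believe and the transistor (and therefore core-count) budget follows directly, which is exactly the kind of extrapolation these early rumours are built on.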

They will be better. TSMC are way ahead, so any lower nm on their part means improvement, but it's just a number really, not a 100% fact. But yes, better.

It's not just a number. If it were just a number Ampere would have been much better than it was, given it was supposedly 8nm. However, it has been said that it would be around what a 10nm TSMC wafer would be. IE, obviously it isn't just a number because TSMC are better. More expensive yes, but better because of it.

I think Nvidia can keep up, but unless they have some good fabric in the works, for how long can they keep up is my feeling atm, hence the old Intel glue statements. And let's face it, at this stage Intel are laptop and server; I do feel, and have felt, that they have been dead in the water for ages. Two years ago, if they had been ready, then sure, it would have been a good start, but at this stage they've got no chance.

Nvidia won't just keep up if they have done it correctly this time around. They will absolutely and utterly pulverise AMD in performance terms, like they should have done with Ampere.

However. And it is a big however: Nvidia are relying on much bigger dies. Mostly because that is how their technology now works.

When Fermi launched it was heavily criticised because at no point was it ever designed for gamers. At all. It was designed for compute reasons, which is why it absolutely smashed ATI in folding, and then AMD. Because AMD had a Maxwell moment and were able to make GPUs that were literally half the size in wafer/die terms, yet performed just as well in gaming. Which is what we all care about.

After that? AMD became obsessed with compute. Which is why cards like the 7970 and Fury etc became very good at folding. As well as mining. They sucked for gaming, though. GCN? It was AMD's Fermi. Why they went down that road has always remained a complete mystery to me.

Ironically it was at that stage that Nvidia did the complete opposite with Kepler. Removed the tankiness of it, cut it down and therefore replaced all of that heat with what mattered - clock speed. They sandbagged with Kepler, making it look extremely disappointing, then three days before launch "Ghost" (an overclocker) talked about a 60% BIOS. What he meant was they raised the clock speeds from around 400MHz to nearly a GHz. And they absolutely pasted AMD in terms of die size, clock speed and gaming performance. Because the 7970 die was much larger than that of the 680 and thus much hotter, more expensive etc.

The other specs give you more of an idea, but it depends on the workload with Nvidia, as sometimes the cores are all in use and other times only half in use. I expect the 80% mark is possible, but it's a different story for AMD with the multi-chip advantage. It doesn't really matter though; either way these cards are going to be very expensive, but if AMD have that GPU Ryzen moment then Nvidia had better look out, as like I said, AMD's roadmaps lately have been very on point.

TBH Nvidia can't lose. Mostly because most people ate up Ampere and thought it was excellent. Therefore, when you go from a disappointing (because it really was deep down) node like that to one that will do what it should have done? you are always going to look good.

However, as we all know it isn't about looking good or winning in performance terms. It's about being able to provide a good, cheap product that people will buy.

Again we return to the whole "Waah, waaah, Turing is too expensive. Nvidia are ripping us off charging us this much, etc etc". Only as I have explained a thousand times, Nvidia were just doing Nvidia things. IE, charging 60% markup on their product. The same 60% they charged on Maxwell, AND Pascal, and etc etc. The difference then? Maxwell and Pascal and Kepler were all tiny. And superb at gaming. As soon as the die size grew? so did the cost. And that is the direction they are determined to head in because of compute, deep learning, etc. All of which is being repurposed for gaming apparently, but none of it crucially matters for gaming.

AMD on the other hand? are once again headed in the completely opposite direction. They will be able to sell GPUs FAR cheaper than Nvidia, and those will be the ones that sell and the ones people buy.

Because seriously, if you thought Turing was expensive, you wait until you find out what the 40 series will cost with their $150-to-make coolers strapped onto them. So that they can dissipate all of that heat, so that they can clock balls on those enormous dies.

Still interested to see how the cards perform, just not buying. Maybe in two more gens, so four years, seems more likely for me, plus I have other things to focus on atm.

If Nvidia don't hit 3GHz I'll be surprised, because AMD will easily, as even my 6800XT hits 2700MHz without issue, like most of them.

Competition is a good thing, and I bet Nvidia are gutted that AMD are even anywhere close, and they are close. For me RDNA2 has been solid; it can play everything fine. Sure, DLSS helped Nvidia, but that advantage is fading, so if AMD keep progressing and gradually improving, Nvidia are going to need a new magic trick for the halo products, and that's all they care about. Can they sell you a 4090Ti for £2500? If they can, they are happy bunnies.

For gamers I will give the next round to AMD. If they make a 970/980 combo? it will be very bad for Nvidia.
 
The density is not an exact thing; even TSMC don't know exactly how packed it is. You're talking about something that only very precise microscopes can see, and you'll only see the tiniest part of it. It doesn't matter whose nm it is, be it Intel or TSMC or Samsung or anyone else, they all calculate it differently; it's just a number.

It's like the clock speed: it's also not exact, as it's different from arch to arch. Nvidia's clock speed isn't the same as AMD's; they are different and work differently. It's BS marketing.

So until they are out it all means nothing.

It's why they add on a + or a P or some term: it's refined, they know it's improved, but they don't know by how much until they have made a wafer and made engineering samples.

All the clock speed tells you is that the higher it is, the more power it will use. Comparing company to company just doesn't work that way, even on the same node; it's about the arch.

So how many Smarties are in the jar? Maybe more in one than the other, depending on how they are arranged. You see, it's way more complex than we understand unless you're working for TSMC.

It's the same with other things like cars: just because one car has more horsepower doesn't mean that the lower-end one can't beat it in a drag race.

As much as MLID has an ego, you should listen to the recent video he made with an Intel engineer that sums up what I'm trying to state: you simply can't compare things based purely on a number, in any way at all, on any spec, until it's made and benched.
 
I think you're giving Nvidia a little too much credit. "Nvidia could at any time release a card that is 4x faster than 3090Ti." I have never seen any evidence that this is even remotely true, either in the past or in recent years.
I think the closest we've ever seen to that is the 6 month turnaround to "fix" Fermi after the 470 and 480 were hilariously bad. But how many of those improvements were already in the works? You can't just design, spin and release brand new silicon in months.
 
I think the closest we've ever seen to that is the 6 month turnaround to "fix" Fermi after the 470 and 480 were hilariously bad. But how many of those improvements were already in the works? You can't just design, spin and release brand new silicon in months.

Hey Ross! Good to see you matey. How goes it?

TBH the 470 and 480 were not bad. The coolers were bad. Like, awful. As soon as you tamed the heat they were fast cards and overclocked very well tbh.

All they did on the 580 was use better power circuitry and transistors (low leakage) and changed the cooler to a vapor chamber, after capping the power use and so on to combat stuff like FurMark.

It wasn't really any faster, just much better behaved.
 
The density is not an exact thing; even TSMC don't know exactly how packed it is. You're talking about something that only very precise microscopes can see, and you'll only see the tiniest part of it. It doesn't matter whose nm it is, be it Intel or TSMC or Samsung or anyone else, they all calculate it differently; it's just a number.

It's like the clock speed: it's also not exact, as it's different from arch to arch. Nvidia's clock speed isn't the same as AMD's; they are different and work differently. It's BS marketing.

So until they are out it all means nothing.

It's why they add on a + or a P or some term: it's refined, they know it's improved, but they don't know by how much until they have made a wafer and made engineering samples.

All the clock speed tells you is that the higher it is, the more power it will use. Comparing company to company just doesn't work that way, even on the same node; it's about the arch.

So how many Smarties are in the jar? Maybe more in one than the other, depending on how they are arranged. You see, it's way more complex than we understand unless you're working for TSMC.

It's the same with other things like cars: just because one car has more horsepower doesn't mean that the lower-end one can't beat it in a drag race.

As much as MLID has an ego, you should listen to the recent video he made with an Intel engineer that sums up what I'm trying to state: you simply can't compare things based purely on a number, in any way at all, on any spec, until it's made and benched.

I still don't think you understand what I am explaining to you.

You make it all sound like it's some sort of crapshoot, and no one knows, etc. It isn't like that at all, dude.

GPUs work on mathematics and science. That is how they are designed, and made, and taped out. As such, can we predict the performance? Yes, yes we totally can. There may be some small room for fluctuations, but we can assess the information we have and make predictions very easily.

Nvidia do not walk into this going "Well, we have no idea how it will perform but let's try our best shall we?". They know. They know from how the architecture works and from the specs and assumed predictions. And people like MLID, Red and Adored can do exactly the same. No, none of them are geniuses and none of them could design a GPU technology, but they do have a firm grasp of what to expect once they know what is going into it. Adored and Red seem *a lot* more savvy and intelligent than MLID. However, it all works on the same theory.

The loose specs of the 4090 have been hovering around for ages now. IE, this -

According to preliminary information the RTX 4090 is to feature AD102 GPU with 16384 CUDA cores. The card also comes with 24GB of GDDR6X memory and TDP of 450W. NVIDIA is more than likely to announce this new model during GeForce Beyond broadcast on September 20th at GTC.

As such it takes a few quite simple mathematical sums to calculate how they will perform. You just take the uplift from the 3090, which will be CUDA cores (because they will be able to fit more in per unit area), and then you calculate how, for example, Ampere would perform with those CUDA cores at those clocks. That part? is very simple.
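
To make those "few simple sums" concrete, here is a minimal sketch using the rumoured 16384-core figure quoted above against the 3090's 10496 CUDA cores. The clock speeds and the scaling factor are assumptions for illustration only, not leaked or confirmed numbers.

```python
# Napkin maths for the rumoured RTX 4090 vs the RTX 3090.
# 10496 is the 3090's CUDA core count; 16384 is the rumoured AD102 figure quoted above.
# The clocks and the scaling factor are ASSUMPTIONS, not leaked or confirmed numbers.

cores_3090, cores_4090 = 10_496, 16_384
clock_3090_ghz = 1.7        # assumed typical 3090 boost clock
clock_4090_ghz = 2.6        # assumed, based on the clock-speed rumours discussed here
scaling = 0.8               # assumed: extra cores never scale perfectly

uplift = (cores_4090 / cores_3090) * (clock_4090_ghz / clock_3090_ghz) * scaling
print(f"Naive estimate: ~{uplift:.2f}x a 3090")
```

With these particular (assumed) numbers it happens to land near the ~90% uplift discussed in this thread, but different clock or scaling assumptions move the answer around considerably, which is the whole caveat with this kind of estimate.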

Ada, Lovelace, whatever name they are giving it *IS* Ampere. It is the same structural design. As such? it is easy to predict. Especially when you have a rough idea of what sort of spec it will be. Now some are even smarter than that, and just through looking at tech docs can even get a rough idea of how much Nvidia will be able to pack in at a certain lithography.

And this is why, long before it launched, Ampere was being slated. Slated for being on a poor node. Slated because the performance was in no way what it could have been, and so on and so forth.

This is why, for over two years, I have maintained that "Hopper, ADA, Lovelace etc etc" (whatever they were going to call the same thing on a smaller TSMC node) would absolutely kick all ass. And why I said it will not be cheap, certainly nowhere near as "cheap" as Ampere.

It's exactly the same formula used with CPUs. Only CPUs are usually much easier to predict. "OK, so AMD say that their next Ryzen CPU will have 15% higher IPC at X clock speed, and so theoretically here is how it will perform".
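
And the CPU version of the same napkin maths, using the hypothetical 15% IPC figure from the example above; the clock speeds below are assumptions, not any vendor's real numbers.

```python
# Sanity-checking a hypothetical CPU claim: performance ~ IPC x clock.
# The 15% IPC uplift is the hypothetical figure from the example above;
# the clock speeds are ASSUMPTIONS for illustration.

ipc_gain = 1.15
old_boost_ghz = 4.9   # assumed previous-gen boost clock
new_boost_ghz = 5.5   # assumed next-gen boost clock

est_uplift = ipc_gain * (new_boost_ghz / old_boost_ghz)
print(f"Theoretical single-thread uplift: ~{(est_uplift - 1):.0%}")
```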

With GPUs it is often much harder. Mostly because, unlike CPUs, it is a very closely guarded secret. It doesn't work like a CPU.

I've never changed my stance on Ampere. I'm not a genius, but I am intelligent enough to know that it could, and should, have been miles better. Turing was a stop gap, so I expected (note expected) it to be much better than it was. Which is probably why I am one of the few who was truly disappointed by it.

AMD have stated that they have tweaked their design. They have shown die maps, clearly showing what they have done. They have made the Infinity Cache part of the GPU smaller. Meaning they can cram in more compute cores, and the die shrink means they can put X more into X die size.

Do you see? This is why the rumours going around that AMD are going to have a Maxwell moment make sense. Because they have come up with a tweak to Navi's design, meaning that they can bash out higher-clocked GPU cores (because, as you said, they can already do 2700MHz now!) yet they can make them smaller than before due to a die shrink *and* tweaking the design.

However. The bottom line with Navi? it is still Navi. And if you understand how it works? you can quite easily predict how it will perform, hence the rumours going around.

It has always been expected that when you shrink down you can put more into the same physical space. However. It is also SUPPOSED to automatically bring lower power usage AND higher clock speeds. Which, in nearly every single case, is what happens. If you have been following AMD? You will have seen the enormous clock rise in their GPUs. Huge! Well, the exact same should have happened with Nvidia and Ampere. IE, the 2100MHz 2080Ti should have been able to go much faster on a lower node. But it couldn't.

That is not normal. Not normal at all. And it was all down to Samsung's node being poor. Not something you should ever expect from TSMC, hence why the 40 series cards are going to be so utterly brutal. However, AMD are on TSMC too!

The only big opposition to that rule? Intel. Intel have had huuuuuuge problems with die shrinks not doing what they are supposed to do. They were so bad Intel had to basically cancel them and continue working on refreshes to keep their products "fresh". Well, at least looking fresh. If you knew your stuff you knew they weren't what we really wanted; they were just Intel struggling to shrink down their technology.
 
I wasn't saying you can't predict the performance from some of the specs, merely that you can't predict the density of the chip; you're literally talking about atoms.

They will be better all round, not just because of the shrink but, like you say, the IPC etc.

Personally, even if AMD are ahead, they won't gain huge market share quickly; they need a few gens of being on top to gain a lot more. Nvidia will just keep doing what they do (why would they change tactics?), and Intel are not going to bring anything to the table.

If Nvidia get beaten it'll be the shortest gen in years; they simply hate losing.

Same thought as at the start: until I see them out in the wild and benched, it all means nothing. That is the whole point of all these reviews by so many people.

Most of the real leaks are not leaks; they are just news bits fed to the right people, sometimes to build hype, sometimes to catch someone out. AdoredTV was by far the smartest of the bunch and at least had the sense to give his real opinion rather than worry about views.
 