Nvidia's Ampere GPUs are reportedly arriving in 1H 2020

Hopefully the reduced power consumption of Nvidia's 7nm GPUs means we can finally start getting normal-sized cards instead of ever-bigger ones.
 
And where the hell are they going to get enough 7nm fabrication capacity to launch a new GPU series?
End of Q2, maybe...
 
Hopefully the reduced power consumption of Nvidia's 7nm GPUs means we can finally start getting normal-sized cards instead of ever-bigger ones.
Doubt it, man. Nvidia's target market doesn't care about card size or anything like that. All they want is "moar power than AMD and GimpWorks RayTracing" over anything else.
 
Doubt it, man. Nvidia's target market doesn't care about card size or anything like that. All they want is "moar power than AMD and GimpWorks RayTracing" over anything else.
Plus, 7nm has a larger physical shrink than it has power reductions, so the surface area (an x² value, of course) shrinks faster than the heat output, meaning these products will likely be harder to cool, not easier. Hence why Navi's TDP is low by past-gen AMD standards but it needs coolers you'd previously associate with high-TDP cards.
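To put some purely illustrative numbers on that (assumed figures for the sake of the argument, not anything the foundries or NVidia have published): suppose a full node shrink scales linear dimensions by about 0.7x while total power only drops by about 30%.

```latex
% Illustrative only: assume a ~0.7x linear shrink and a ~30% power reduction.
A_{\mathrm{new}} = (0.7)^{2} A_{\mathrm{old}} \approx 0.49\,A_{\mathrm{old}},
\qquad
P_{\mathrm{new}} = 0.7\,P_{\mathrm{old}}
\quad\Rightarrow\quad
\frac{P_{\mathrm{new}}/A_{\mathrm{new}}}{P_{\mathrm{old}}/A_{\mathrm{old}}}
  = \frac{0.7}{0.49} \approx 1.43
```

So even though total heat drops by roughly 30%, each square millimetre of die is dissipating roughly 40% more of it, which is exactly why the coolers don't get any smaller.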
 
And where the hell are they going to get enough 7nm fabrication capacity to launch a new GPU series?
End of Q2, maybe...

Maybe they've been stockpiling it. Maybe that's why there is a shortage of 7nm?
They confirmed a while ago this would be Samsung's 7nm, which hasn't had any reported capacity issues yet and doesn't have many other customers, having only gone into full production a matter of months ago. The only product released on it so far, I think, is their own Exynos 9825 mobile SoC for the Note10 in some regions, so NVidia probably has very high priority on the customer list, if not most of the capacity, for a period. Still, yeah, it'll likely be staggered launches stretching out till the end of Q2, because it takes a while to stockpile enough chips and boards and to get them all assembled and shipped in high quantities for a highly anticipated launch.
 
Ampere was coming after Volta.

Turing only happened because AMD are crap. Nvidia have had all of the time in the world to get this ready. I guess they waited as long as it took to clear out all of the suckers* before they moved on.

*Turing buyers. I guess sales are slowing now. When you're this far ahead I guess you can do whatever you want.
 
I think it's probably true we wouldn't have got a wide consumer release of Turing if AMD were more competitive, but work on Turing began in 2014, and none of NVidia's official roadmaps before Turing went beyond Volta, so it was almost certainly already coming as an HPC/high-end developer/prosumer set of cards, and they must have committed to designing the smaller consumer dies at least 18 months before their respective launches. It was likely spurred by the fact that Ampere was designed with EUV, and all its changes in design rules, in mind from quite early in its development, and no EUV nodes are currently ready for GPU-scale production (the timescale has slipped slightly from some of the pre-2014 expectations), meaning they would have had to wait three to four years between consumer releases. AMD managed to bypass this by betting quite early that EUV would come a little later than anticipated back in 2014, designing for the pre-EUV version of TSMC's 7nm, and managing to grab a pretty sizeable chunk of early capacity.
 
If it is Samsung making these then we could see either a delayed release schedule or some kind of shortages/price gouging. Word on the street is that several chemicals linked to chip production are on a trade ban from Japan to Korea. Might not affect Samsung as much as the media are making out but without inside knowledge from Samsung it would be very hard to guess accurately.
Let's not forget that these will not just be competing with RDNA, with Intel due to hit the market next year too. Though their fab issues might make it a paper launch.
 
Ampere is going to be a disappointment.

NVidia and the other GPU vendors (AMD & Intel) need to change the way cards are made, as going to 7nm is approaching a dead end.

Yes, Ampere will probably be faster than Turing, but it is not enough; Ray Tracing will still be a struggle, as it is now even on an RTX Titan SLI setup. NVidia will never be able to improve RT enough by relying on node shrinks after 7nm and needs to find other ways of incorporating it on graphics cards, like having a separate chip for it.

On the subject of separate chips, we are getting to the point where vendors need to start using several smaller chips on a card, rather than one huge one like with Turing and Volta, to increase performance and make cooling easier.

Ampere is going to be a very expensive dead end for NVidia (and us) if it is more of the same (Turing) on a smaller node.
 
Ampere is going to be a disappointment.

NVidia and the other GPU vendors (AMD & Intel) need to change the way cards are made, as going to 7nm is approaching a dead end.
EUV is not going to hit a dead end for quite a while yet, and these node shrinks should be pretty useful for now. Ampere should be a huge node step, as it's going from an update of a 20nm process to a true EUV 7nm node.

Yes, Ampere will probably be faster than Turing, but it is not enough; Ray Tracing will still be a struggle, as it is now even on an RTX Titan SLI setup.
You can't use any realtime raytracing games in SLI (DXR is a DX12 extension), so that might explain why SLI Titans struggle with it lol.

NVidia will never be able to improve RT enough by relying on node shrinks after 7nm and needs to find other ways of incorporating it on graphics cards, like having a separate chip for it.
We know NVidia's designs for the RT units were never going to stand still; there's a huge amount of low-hanging fruit architecture-wise to improve the current raytracing acceleration units, and lots of areas of the raytracing pipeline that we haven't even begun to hardware-accelerate with specific units. I wouldn't be surprised at all if we see a >60% rise in top-end RT performance. This isn't rasterisation, where the industry has spent four decades and billions of pounds of development to get to a stage where progress is meagre; progress in RT will absolutely put traditional shader performance gains to shame from a layman's perspective. It would therefore indeed be incredibly disappointing if Ampere were merely a shrink, but NVidia's track record doesn't imply it will be.

On the subject of separate chips, we are getting to the point where vendors need to start using several smaller chips on a card, rather than one huge one like with Turing and Volta, to increase performance and make cooling easier.

Ampere is going to be a very expensive dead end for NVidia (and us) if it is more of the same (Turing) on a smaller node.
It's coming, definitely coming. You can rest assured that as soon as MCM GPUs are viable for mass production, they will be mass produced; all three companies are throwing spades at this side of things (basically the issue is creating interconnects/fabric/interposers with enough bandwidth that don't have like a million little pins/bumps/points you need to get perfectly aligned).
 
You can't use any realtime raytracing games in SLI (DXR is a DX12 extension), so that might explain why SLI Titans struggle with it lol.

Really?

Single RTX Titan stock SOTTR

[benchmark screenshots]





SLI RTX Titans stock SOTTR

[benchmark screenshots]




I think the above scaling is excellent when using DX12 and Ray Tracing.
 
Well, I'm delightfully corrected that there is a game that finally uses the AFR libraries for DX12 mGPU support, though of course this isn't through the SLI system software-wise, and it's still the case that any game that relies on traditional SLI (still the vast majority of games with any SLI support) has said limitations.
 
Well, I'm delightfully corrected that there is a game that finally uses the AFR libraries for DX12 mGPU support, though of course this isn't through the SLI system software-wise, and it's still the case that any game that relies on traditional SLI (still the vast majority of games with any SLI support) has said limitations.

You're talking with a guy who has arguably every Nvidia GPU in almost every possible configuration/setup. If he says SLI works, it usually does :D
 
The confusion is regarding my use of "SLI" to refer to the technology and his use of "SLI" to refer to having two NVidia cards in the system; my original statement is still correct under the stricter definition of "SLI".
 
The confusion is regarding my use of "SLI" to refer to the technology and his use of "SLI" to refer to having two NVidia cards in the system; my original statement is still correct under the stricter definition of "SLI".

You're just trying to change your argument to make it seem like you are still correct.

"SLI is a multi-GPU (Graphics Processing Unit) scaling engine, essentially a method of using more than one graphics card to boost in-game performance by up to +100% per additional GPU. The principle is beautifully simple, and it is equally simple to use because the technology is neatly contained within all modern GeForce graphics drivers and many GeForce GPUs."

Directly from Nvidia's site.
 
You're just trying to change your argument to make it seem like you are still correct.

"SLI is a multi-GPU (Graphics Processing Unit) scaling engine, essentially a method of using more than one graphics card to boost in-game performance by up to +100% per additional GPU. The principle is beautifully simple, and it is equally simple to use because the technology is neatly contained within all modern GeForce graphics drivers and many GeForce GPUs."

Directly from Nvidia's site.

Nah man, it was already obvious that when I said SLI doesn't work in DX12, I meant SLI doesn't work in DX12, not that multi-GPU doesn't work in DX12; we all know there are rare examples like Civ and stuff. That scaling engine in your quote isn't used for DX12 multi-GPU modes, thus reinforcing what I was already saying: the driver SLI mechanism it refers to is bypassed.
 
You are contradicting yourself. You cannot definitively say SLI doesn't work in DX12 and then say there are examples of it working.

SLI and Crossfire do work, clearly. There are more options, and devs are not limited to traditional SLI/Crossfire. You can still use non-AFR techniques, even in DX11.
 
At no point have I ever said SLI works in DX12, just that multi-GPU systems can work in DX12. Some people call two NVidia cards in one system "SLI" as shorthand slang, which is perfectly acceptable I guess, but clearly not what I was referring to. The drivers are bypassed completely in any multi-GPU mode in DX12; no SLI or CrossfireX mechanisms are involved.
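For anyone wanting to see the distinction concretely, here's a rough sketch (my own illustration, not code from SOTTR or any particular engine) of what DX12 explicit multi-adapter looks like: the application enumerates the GPUs itself and creates an independent D3D12 device on each one, so the driver's SLI/AFR machinery never enters the picture.

```cpp
// Minimal sketch of DX12 "explicit multi-adapter": the application, not the
// driver, discovers each GPU and creates an independent device on it.
// Illustration only -- real engines add command queues, cross-adapter heaps,
// and their own frame-distribution logic on top of this.
// Link against d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceOnEachAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    // Walk every adapter the OS exposes -- no SLI bridge or driver profile needed.
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasteriser

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // Each device is driven independently; how work (e.g. alternate
            // frames) is split between them is entirely up to the application.
            devices.push_back(device);
        }
    }
    return devices;
}
```

How frames or workloads get split across those devices (AFR-style or otherwise) is then entirely the engine's responsibility, which is exactly why a DX12 mGPU title can scale on two Titans without the classic SLI path being involved.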
 