WYP
News Guru
Nvidia's finally moving to 7nm.

Read more about Nvidia's Ampere series of GPUs reportedly arriving in 1H 2020.

> Hopefully the reduced power consumption of Nvidia's 7nm GPUs means we can finally start getting normal-sized cards instead of ever-bigger ones.

Doubt it man. Nvidia's target market doesn't care about card size or anything like that. All they want is "moar power than AMD and GimpWorks RayTracing" over anything else.
Plus 7nm delivers a larger physical shrink than it does power reduction, so the surface area (an x^2 value ofc) shrinks faster than the heat output, meaning these products will likely be harder to cool, not easier. Hence why Navi's TDP is low by past-gen AMD standards but it needs coolers you'd previously associate with high-TDP cards.
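The square-law point above can be sketched with toy numbers (all figures below are hypothetical, for illustration only, not real die specs):

```python
# Illustrative sketch of the power-density argument: if a shrink cuts
# linear dimensions by ~0.7x, die area falls with the SQUARE of that
# (~0.49x), so unless power also falls by ~51%+, watts per mm^2 rise.

def power_density(power_w, area_mm2):
    """Watts per square millimetre of die area."""
    return power_w / area_mm2

# Hypothetical older part: 250 W on a 500 mm^2 die.
old_density = power_density(250, 500)

# Shrink linear dimensions by 0.7x: 500 mm^2 -> 500 * 0.7**2 ~ 245 mm^2.
# Suppose the new node only cuts power by 30%, to 175 W.
new_density = power_density(250 * 0.7, 500 * 0.7**2)

# Power density went UP even though total power went DOWN, which is
# why the smaller chip can be harder to cool.
assert new_density > old_density
print(round(old_density, 3), round(new_density, 3))  # 0.5 0.714
```

The same arithmetic is why a lower-TDP chip on a much smaller die can still need a beefier cooler: the heat is concentrated into less area.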
And where the hell are they going to get enough 7nm fabrication capacity to launch a new GPU series?
End of q2 maybe...
> Maybe they've been stockpiling it. Maybe that's why there is a shortage of 7nm?

They confirmed a while ago this would be Samsung 7nm, which hasn't had any reported capacity issues yet, or many other customers, having only gone into full production a matter of months ago. The only product released on it so far, I think, is their own Exynos 9825 mobile SoC in some regions for the Note10, so NVidia probably has very high priority on the list of customers, if not most of the capacity for a period. Still, yeah, it'll likely be staggered launches stretching out till the end of Q2, because it still takes a while to stockpile enough chips and boards and to get them all assembled and shipped in high quantities for a highly anticipated launch.
> Ampere is going to be a disappointment. NVidia and the other GPU vendors (AMD & Intel) need to change the way cards are made, as going to 7nm is approaching a dead end.

EUV is not going to be a dead end for quite a while yet, and these node shrinks should be pretty useful for now. Ampere should be a huge node step, as it's going from an update of a 20nm process to a true EUV 7nm node.
> Yes, Ampere will probably be faster than Turing, but it is not enough; Ray Tracing will still be a struggle, as it is now even on an RTX Titan SLI setup.

You can't use any realtime raytracing games in SLI (DXR is a DX12 extension), so that might explain why SLI Titans struggle with it lol.
> NVidia will never be able to improve RT enough by relying on node shrinks after 7nm, and they need to find other ways of incorporating it on graphics cards, like having a separate chip for it.

We know NVidia's designs for the RT units were never going to stand still. There's a huge amount of low-hanging fruit architecture-wise to improve the current raytracing acceleration units, and lots of areas of the raytracing pipeline that we haven't even begun to "hardware accelerate" with specific units, so I wouldn't be surprised at all if we see a >60% rise in top-end RT performance. This isn't rasterisation, where the industry has spent 4 decades and billions of £'s of development to get to a stage where progress is meagre; progress in RT will absolutely put traditional shader performance gains to shame from a layman's perspective. It would therefore indeed be incredibly disappointing if Ampere was merely a shrink, but NVidia's track record doesn't imply it will be.
> On the subject of separate chips, we are getting to the point where vendors need to start using several smaller chips on a card, rather than one huge one like with Turing and Volta, to increase performance and make cooling easier.

It's coming, definitely coming. You can rest assured that as soon as MCM GPUs are viable for mass production, they will be being mass produced; all three companies are throwing spades at this side of things. (Basically the issue is creating interconnects/fabric/interposers with enough bandwidth that don't have like a million little pins/bumps/points you need to get perfectly aligned.)
> Ampere is going to be a very expensive dead end for NVidia (and us) if it is more of the same (Turing) on a smaller node.
> You can't use any realtime raytracing games in SLI (DXR is a DX12 extension), so that might explain why SLI Titans struggle with it lol.
Well, I'm delightfully corrected that there is a game that finally uses the AFR libraries for DX12 mGPU support, though of course this isn't through the SLI system software-wise, and it's still the case that any game relying on traditional SLI (still the vast majority of games with any SLI support) has said limitations.
The confusion is regarding my use of "SLI" to refer to the technology and his use of "SLI" to refer to having two NVidia cards in the system, my original statement is still correct using the stricter definition of "SLI".
You're just trying to change your argument to make it seem like you are still correct.
"SLI is a multi-GPU (Graphics Processing Unit) scaling engine, essentially a method of using more than one graphics card to boost in-game performance by up to +100% per additional GPU. The principle is beautifully simple, and it is equally simple to use because the technology is neatly contained within all modern GeForce graphics drivers and many GeForce GPUs."
Directly from Nvidia's site.