AMD Engineer confirms that RDNA 3 will make use of 5nm and 6nm silicon

I don't care how it's done; I just want performance comparable to Nvidia's next stuff (as they'll launch around the same time) in every area of GPU performance that matters these days: rasterization, RT, AI upscaling, etc.
 

I think that may be why they are combining chips.

Navi was supposed to be "scalable" from day one. Meaning, basically? Ryzen on a GPU: more than one core cluster. It wasn't. Nvidia will be doing it too; it just seems to be a case of who does it first now.

It's in everyone's interest. Small dies are much cheaper and yield far better, so hopefully this will take the sting out of the stupidly huge tank dies that cost a fortune.
 

That's all well and good, but all the rumours point to the multi-chip design of RDNA3 being reserved for the super-expensive stuff, namely Navi 31 and 32. The main benefit of Ryzen was that we could get 4 cores for the price of a 2 core, a 6 core for the price of a 4 core, an 8 core for the price of a 6 core, a 12 core for the price of an 8 core, etc. RDNA3, however, will be a 64 core at the price of a 48 core. The performance jump could be huge, but so will the price. We won't see the 'Ryzen of GPUs' for at least another generation. The RDNA3 chiplet design is more like Threadripper, or even EPYC, than Ryzen.
 

Anything released between now and the end of 2022 is going to be stupid expensive no matter what it is tbh.

This won't settle down properly until Intel have their own fabs able to make their own GPUs. What did they say? 2025? ICR.

The problem is that both AMD and Nvidia wait in a queue. And that queue has been of epic proportions.

I think it's high time Nvidia made their own fabs. AMD don't have the capital; Nvidia, however, do.
 


I see no reason why we should compare CPU architecture to GPU architecture in terms of MCM design. Sure, it's the only other MCM design introduced in the high-performance field, but the two require quite different designs. For all we know, they could just have the normal RDNA cores separated from the AI/RT cores and connected by Infinity Fabric, instead of everything on one die split multiple ways. That would be quite a change from Ryzen.
 

It's less about architectural similarities and more about the end result for consumers: the prices we pay and the performance we get. Alien was talking about Navi being the Ryzen of GPUs, but I don't see that happening for a long time, because the benefits of a multi-chip GPU will only be seen at the absolute bleeding edge of $2k graphics cards, whereas Ryzen benefited the lower end and TR/EPYC benefited the top end. If MCM designs from RDNA4 bring $200-600 GPUs with performance levels that would normally require $400-800 monolithic GPUs, then we'll see something more akin to Ryzen.
 

That's what will happen.

Die shrinks can only go on for so long. If this article is correct, that means (and I think I have it right) 5nm and 6nm. Beyond that, what's next? 3nm?

Each time you shrink, the wafers become more and more expensive, mostly because you're on brand-new technology. I would imagine they will shrink only so far, then start combining dies. I don't know exactly how it will work, and the "Ryzen" comparison was just an idea, but Infinity Fabric works very well, especially on faster buses. Whether this will need PCIe 4, PCIe 5 or beyond for the bandwidth to connect them, I really don't know.

What I do know is that small GPUs are very cheap. Not only very cheap, but also far higher yielding than large ones, mostly because you reduce the risk of hitting a bad section of the wafer. I don't know if you've seen the 6500 XT die, but it's hilariously small: so small you could easily fit four of them in the space a 6900 XT die takes, for example.
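
As a quick sanity check on that die-size comparison, here's a tiny sketch using the commonly reported figures of roughly 107 mm² for Navi 24 (6500 XT) and 520 mm² for Navi 21 (6900 XT); those numbers are my assumption, not from the post:

```python
# Commonly reported die sizes (assumed figures, not from the article).
NAVI_24_MM2 = 107   # 6500 XT
NAVI_21_MM2 = 520   # 6900 XT

print(f"Four Navi 24 dies: {4 * NAVI_24_MM2} mm^2")            # 428 mm^2
print(f"One Navi 21 die:   {NAVI_21_MM2} mm^2")                # 520 mm^2
print(f"Area ratio:        {NAVI_21_MM2 / NAVI_24_MM2:.1f}x")  # ~4.9x
```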

Now obviously, right now on whatever node it's on isn't the time: there aren't enough CUs on a die that small to make it worth it. As we shrink, though? Yeah, then I could see why they would do it.

As I mentioned, it's not only AMD doing it. Nvidia plan to do exactly the same; it's just that AMD announced it years ago now and it hasn't happened yet.

BTW, as for it being only for high-end GPUs? I doubt that. If you made a die that was, say, 10×10 mm and then stuck four of them together, even for the same end result as a 20×20 mm die (i.e. the same area as the four small dies), you would still have a much, much higher success rate than you would cutting the large one, mostly because with the large one you are quadrupling the risk of hitting a bad section of silicon, meaning the whole thing is pretty much useless unless you laser off the bad area.
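
As a back-of-the-envelope illustration of that yield argument, here's a minimal sketch using a simple Poisson defect model (yield ≈ e^(−D·A)); the defect density is an invented figure purely for illustration, not real foundry data:

```python
import math

def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free under a simple Poisson model."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.1  # assumed defect density in defects/cm^2 -- illustrative only

small = poisson_yield(10 * 10, D)   # one 10x10 mm chiplet
large = poisson_yield(20 * 20, D)   # one 20x20 mm monolithic die

print(f"10x10 mm chiplet yield:    {small:.0%}")   # ~90%
print(f"20x20 mm monolithic yield: {large:.0%}")   # ~67%

# A defect on the big die scraps (or forces cutting down) 400 mm^2 of silicon;
# a defect on a chiplet scraps only 100 mm^2, so far more of each wafer ends
# up in sellable products even though four good chiplets are needed per GPU.
```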

If you just concentrated on the same small die, then soldered multiples of it to a substrate (just like Ryzen), you'd make bank. Your product stack then works just like Ryzen's does: 4-8 cores is one die, 12-16 cores is two, and beyond that you use three or four. Obviously with Threadripper there are four dies on the package regardless of how many are enabled and active.
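
To put rough numbers on that product-stack idea, here's a sketch of silicon cost per product when every tier reuses the same small compute die versus scaling a monolithic die up with core count; the wafer cost, usable wafer area and defect density are all assumed values chosen only to show the shape of the comparison:

```python
import math

WAFER_COST = 10_000     # assumed cost per wafer, arbitrary units
WAFER_AREA = 70_000     # rough usable area of a 300 mm wafer, in mm^2
DEFECT_DENSITY = 0.1    # assumed defects per cm^2 -- illustrative only

def cost_per_good_die(area_mm2: float) -> float:
    """Silicon cost of one defect-free die under a simple Poisson yield model."""
    yield_ = math.exp(-DEFECT_DENSITY * area_mm2 / 100.0)
    dies_per_wafer = WAFER_AREA / area_mm2   # ignores edge losses for simplicity
    return WAFER_COST / (dies_per_wafer * yield_)

CHIPLET_MM2 = 100  # assumed size of the one small compute die reused everywhere

# Chiplet-style stack: the same die, just more of them per package.
for chiplets in (1, 2, 4):
    print(f"{chiplets} x chiplet:        {chiplets * cost_per_good_die(CHIPLET_MM2):6.1f}")

# Monolithic equivalents with the same total silicon area.
for area in (100, 200, 400):
    print(f"{area} mm^2 monolithic: {cost_per_good_die(area):6.1f}")
```

The gap widens as the products get bigger: the four-chiplet part costs roughly 25% less in silicon than its 400 mm² monolithic equivalent under these toy numbers, before packaging costs.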

That is why they were able to sell Ryzen so cheaply at launch and bring down the cost of high-core-count CPUs massively. And that is why Intel can't compete with them until they get back on the shrink train and get their cores down even smaller. Intel could probably make a 64-core CPU now, only it would be absolutely huge (add together the area of four 16-core Ryzen packages) and the failure rate would be astronomical, as would the cost. Which is why Intel haven't even bothered since the 3000-series Threadrippers came out: they simply cannot compete on the technology. IPC? Intel are OK. But the way Ryzen works is how AMD have left them in the dust when it comes to high-end CPUs.
 

Yeah, I agree with all that.

Really, what I was saying is that RDNA3 won't be the Ryzen of GPUs we were expecting way back when we heard about Zen-like principles being applied to GPUs. Maybe it's not even the TR of GPUs; maybe it's more like EPYC. Maybe RDNA4 will be the TR of GPUs, and thereafter (RDNA5 or whatever it ends up being called) we'll actually see low-to-midrange GPUs that are 'stacked' tiny dies acting as one larger die, offering excellent performance for less money than if AMD (or Nvidia and Intel) went the monolithic route.

Of course, I don't think RDNA4/5 will be affordable, even at the low end. AMD will still price their products in line with the inflation we're seeing; it's just that they'll make more money per GPU, which in turn will fund R&D and production capacity. I doubt we'll see the cost savings that they will. Ultimately, what matters more than cost savings is fixing the availability crisis. If it's easier to manufacture multiple tiny dies, stick 'em together, sell them for €200-1000 and actually provide gamers with GPUs, I'd be grateful for just that, even if AMD could be selling them for less.
 