AMD Radeon Navi Launch Timeframe Leaked - Releases After 3rd Gen Ryzen

Yeah, in theory they could launch with ~40CU (~RX580 equivalent) parts as the highest Navi offering, which should be tiny and fairly cheap on 7nm, not much larger than a Zen2 chiplet, while offering around Vega56 levels of performance from clock speed gains alone and the same memory bandwidth on a 256-bit bus with 16Gbps GDDR6. Presumably a 128-bit die with ~20CUs will come with it for the lower end of the market around the same time, as with Polaris' launch (and for mobile dGPUs, particularly Apple's lineup including iMacs). Hopefully "Scalability" means they've broken past the 64CU practical core scaling limit of past architectures, so they will eventually have something beyond a ~60CU model, possibly keeping HBM for an ~80CU variant to top off a four-die line-up more akin to NVidia's since Kepler.
 
~40CU, not much larger than a Zen2 chiplet


No way. The Zen 2 chiplet is ~80 mm2. Polaris 10 is 232 mm2. Even if 7nm scales it down 2x, it will not be under 100 mm2. When considering the extra complexity of the architecture and that the physical interfaces (GDDR6) don't scale down well, it will probably be much larger than 100 mm2.


Or, from the other direction, look at Vega 20, at 331 mm2 on 7nm. That's 64 CU. For 40 CU, if it scaled down linearly (which it won't, because the CUs aren't the only thing on the chip), it would be 207 mm2. It's definitely not out of the question that Navi is designed to have smaller CUs and otherwise save some space, but still, under 100 mm2 seems very unlikely for a 40 CU chip.
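For what it's worth, here's the back-of-the-envelope version of that linear scaling in Python (figures are the ones quoted above; it deliberately ignores everything on the die that isn't CUs, so treat it as a rough sketch, not a prediction):

# Naive linear scaling of Vega 20's 7nm die down to 40 CUs.
# Assumes area scales purely with CU count, which it won't:
# memory PHYs, display/media blocks etc. don't shrink with the CUs.
vega20_area_mm2 = 331.0   # Vega 20, 7nm, 64 CUs (figure quoted above)
vega20_cus = 64
target_cus = 40
naive_40cu_mm2 = vega20_area_mm2 * target_cus / vega20_cus
print(f"Naive 40 CU estimate: {naive_40cu_mm2:.0f} mm^2")   # ~207 mm^2

# Reference points, also quoted above:
zen2_chiplet_mm2 = 80     # ~80 mm^2 Zen 2 chiplet
polaris10_mm2 = 232       # Polaris 10 on 14nm
print(f"Zen 2 chiplet ~{zen2_chiplet_mm2} mm^2, Polaris 10 {polaris10_mm2} mm^2")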
 
No way. The Zen 2 chiplet is ~80 mm2. Polaris 10 is 232 mm2. Even if 7nm scales it down 2x, it will not be under 100 mm2. When considering the extra complexity of the architecture and that the physical interfaces (GDDR6) don't scale down well, it will probably be much larger than 100 mm2.


Or, from the other direction, look at Vega 20, at 331 mm2 on 7nm. That's 64 CU. For 40 CU, if it scaled down linearly (which it won't, because the CUs aren't the only thing on the chip), it would be 207 mm2. It's definitely not out of the question that Navi is designed to have smaller CUs and otherwise save some space, but still, under 100 mm2 seems very unlikely for a 40 CU chip.

To be fair, Navi doesn't follow GCN much, if at all; it's something like Super SIMD, with instructions that enable the whole GPU to be capable of tensor operations. At least that's what I've seen in the patent for Navi.

And Vega 20 has DOUBLE the ROPs and double the compute hardware compared to Vega 10, so yeah, the scaling is non-existent. It also has double the HBM2 controllers, so yes, it got far more complex.
Navi, according to what's in the wild, allegedly has half the compute performance of a Vega 64 while almost matching it in graphical performance, which makes sense given it's headed for a console.
Also, GDDR6 isn't really more complex to implement, and what really killed the big Vegas was the use of HBM2.

Alas, Vega 20 was also kind of slapped together as a last resort to test the 7nm node, so it doesn't really reap the benefits of 7nm.
 
No way. The Zen 2 chiplet is ~80 mm2. Polaris 10 is 232 mm2. Even if 7nm scales it down 2x, it will not be under 100 mm2. When considering the extra complexity of the architecture and that the physical interfaces (GDDR6) don't scale down well, it will probably be much larger than 100 mm2.


Or, from the other direction, look at Vega 20, at 331 mm2 on 7nm. That's 64 CU. For 40 CU, if it scaled down linearly (which it won't, because the CUs aren't the only thing on the chip), it would be 207 mm2. It's definitely not out of the question that Navi is designed to have smaller CUs and otherwise save some space, but still, under 100 mm2 seems very unlikely for a 40 CU chip.

Zen1 used a 213mm^2 die (about the same size as Polaris10), Zen2 uses a ~80mm^2 chiplet, and while they've obviously taken a lot of the uncore off the chiplet entirely there, they've also likely increased the execution unit count inside the core. I think around 100mm^2 is certainly possible, given there's a trend of breaking the cores up a little more now to ensure they stay well fed as they scale to large counts, plus CPU dies are heavily taken up with SRAM, which isn't always known to be a great scaler. Either way, CPUs and GPUs won't scale to the same degree with 7nm, and historically GPUs have scaled quite well.
I don't think we can take Vega20 as a good indicator of 7nm scaling because, besides the architecture not being particularly well optimised for the node, they doubled up the memory controller, added a lot of new AI-oriented instructions and buffed up the relevant units. Navi can reach Vega1 levels of theoretical memory bandwidth while maintaining Polaris' memory controller bit width by using up to 16Gbps GDDR6 instead of the up to 8Gbps GDDR5 on Polaris.
 
Let's talk about performance.

RX 680 (supposedly the top of the line) will compete with the RTX 2060.
RX 670 will compete with the GTX 1660 Ti.
RX 660 will compete with the GTX 1660.

Vega VII is competing against the RTX 2080. AMD has no competition against the RTX 2080 Ti. As for the replacement of the Vega 56 (the Vega 64's replacement is effectively the Vega VII), performance will be around the RTX 2070.

That's my prediction. Points in favour will be that the 7nm process will be very power efficient, and probably cheaper than the Nvidia counterparts.
 
Personally I'd expect the RX680 to drop in quite neatly between the RTX2060 & RTX2070 (which are relatively close together and use the same silicon; bandwidth-wise we can expect something that lines up well with the RTX2070), with an RX670 dropping in between the RTX2060 & GTX1660Ti. An RX660 would likely be a far bigger jump than the closely packed Turing line-up allows, with a drop straight down to half the bus width & core count more likely, probably delivering performance between the GTX1650 & GTX1660.
 
Let's talk about performance.

RX 680 (supposedly the top of the line) will compete with the RTX 2060.
RX 670 will compete with the GTX 1660 Ti.
RX 660 will compete with the GTX 1660.

Vega VII is competing against the RTX 2080. AMD has no competition against the RTX 2080 Ti. As for the replacement of the Vega 56 (the Vega 64's replacement is effectively the Vega VII), performance will be around the RTX 2070.

That's my prediction. Points in favour will be that the 7nm process will be very power efficient, and probably cheaper than the Nvidia counterparts.

AMD will not compete with 2080ti now or any time soon. They simply don't have Nvidia's budget or resources. People need to understand this. Nvidia are years ahead, and that gap will only increase as time passes unless AMD decide to risk their entire company on a GPU.

Personally I don't think it's worth it. So long as they can make "a" GPU that is reasonable and keep improving Zen then I would be a happy bunny.
 
I don't think we should rule out an eventual Navi-based RTX2080Ti competitor (given we're comparing a 7nm card against a 12nm one); the 2080Ti is pretty much exactly 50% larger (core count/die size) than the RTX2080. VegaII is a 331mm^2 die, so if they were able to scale Vega up to ~500mm^2 on 7nm they'd already have an RTX2080Ti competitor. AMD's big dies are usually 500-550mm^2, which is still far smaller than the RTX2080Ti's 754mm^2.
VegaII's die size is closest to the GTX1660Ti's in the Turing line-up.
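As a rough sanity check on those die sizes (the Turing figures are the commonly cited ones, and the ~1.5x scale-up is just the 2080 -> 2080 Ti core-count gap applied naively to Vega 20, so it's illustrative only):

# Die sizes quoted in this thread plus commonly cited Turing figures (mm^2).
dies_mm2 = {
    "Vega 20 (7nm, 64 CU)": 331,
    "TU116 (GTX 1660 Ti)":  284,
    "TU104 (RTX 2080)":     545,
    "TU102 (RTX 2080 Ti)":  754,
}
for name, area in sorted(dies_mm2.items(), key=lambda kv: kv[1]):
    print(f"{name}: {area} mm^2")

# Scaling Vega 20 up by the ~1.5x gap between the 2080 and 2080 Ti
# would land around 96 CUs and roughly 500 mm^2 on 7nm.
print(f"Hypothetical ~{round(64 * 1.5)} CU part: ~{round(331 * 1.5)} mm^2")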

Of course, with 7nm yields still young, VegaII so recent, and no obvious economical memory choice for such a card yet, it's likely such a model wouldn't arrive until around spring next year.
 
And Vega 20 has DOUBLE the ROPs and double the compute hardware compared to Vega 10


No, it doesn't. It has the exact same number of ROPs, and I'm not sure what you mean by 'double the compute hardware'. It has the same number of CUs, and they are pretty similar.
 
I don't think we should rule out an eventual Navi-based RTX2080Ti competitor (given we're comparing a 7nm card against a 12nm one); the 2080Ti is pretty much exactly 50% larger (core count/die size) than the RTX2080. VegaII is a 331mm^2 die, so if they were able to scale Vega up to ~500mm^2 on 7nm they'd already have an RTX2080Ti competitor. AMD's big dies are usually 500-550mm^2, which is still far smaller than the RTX2080Ti's 754mm^2.
VegaII's die size is closest to the GTX1660Ti's in the Turing line-up.

Of course, with 7nm yields still young, VegaII so recent, and no obvious economical memory choice for such a card yet, it's likely such a model wouldn't arrive until around spring next year.

I think Nvidia are a lot further ahead than we know. I.e., I think they have been severely holding back their tech, whereas AMD are releasing everything they have in a desperate attempt just to stay current. The Radeon VII was/is still bug-ridden and not performing how it should.
 
NVidia (or anyone) couldn't really make a die much bigger than GV100 (Volta), and TU102 (2080Ti/RTX Titan) was only just shy of it, so besides maybe memory configurations I don't think NVidia are holding back much at the moment (besides not moving to 7nm as aggressively as they maybe could). But if Navi scales at least as well with 7nm as Zen2 seems to have done, then they'll have an easy advantage until NVidia roll out 7nm parts, which could be a good six months later. Given AMD's general rise in R&D budget since Zen, and the fact this architecture also seems to be heading for the consoles, possibly in quite high CU configurations, there's a good chance it will have been hindered much less by the shoestring budget the Radeon group have operated on through most of the GCN years.

I think for now NVidia are mostly betting on AMD not having made significant architectural improvements with Navi, so that it more or less matches Turing in perf/watt and cost; but if Navi does have strong architectural improvements it would be a game changer on top of 7nm's existing price/performance benefits.
 
I think Nvidia are a lot further ahead than we know.

I don't think so, although I'm pretty sure NVIDIA is going to make good strides with 7nm. Obviously NVIDIA could be a lot more competitive price-wise if it wanted, but tech-wise, I think we're getting what NVIDIA can offer.

I think that in the long run, both companies are going to go more into data centre GPUs, and considering the number of companies (including all the huge ones) working on game streaming, that's likely how many people will end up accessing GPUs.


Nvidia are years ahead, and that gap will only increase as time passes unless AMD decide to risk their entire company on a GPU.


Can't see why. NVIDIA has a bigger budget, but all AMD needs is a decent GPU architecture. Zen is a good example: improving the construction cores only went so far, there was a need for a clean break, and it did allow AMD to get back into the game (with a little luck). AMD doesn't need to 'risk their entire company'; it just needs to create a good design and keep moving forward.
 
Yeah, datacentre & AI/enterprise GPUs are the reason AMD can't ignore or pass up making those big 500mm^2+ dies anymore. With datacentres, density & scaling are everything, and NVidia has dominated recently because they can cram essentially two GPUs' worth of resources into one chip. All AMD needs is an OK core that scales beyond their current limits and lets them properly make use of 7nm's density benefits. Vega20 is an easy stopgap, but I seriously doubt it has more than an 18-month life in it as their top card from its November 2018 launch; it could easily be at the limits (die-size-wise) of 7nm now, but definitely not in a year.

AMD has never really meaningfully fallen behind in the sub-£250 space: Polaris has been perfectly competitive with Pascal above £200 (crypto surges aside) and dominates below £150, and even Turing is struggling to meaningfully compete in this space against that three-year-old architecture. It's only once you get to bandwidths where AMD needs HBM, and start to hit their 64CU limit, that the tables flip. All AMD really needs is an architecture that can scale well to high core counts and doesn't need stacks of exotic memory to do so at more consumer-viable core counts; they don't actually need to improve per-CU performance at all to at least match Turing in many respects with 7nm.
 
Much of Nvidia's advantage in consumer chips has also been due to better DX11 driver optimisation. Now the newer approach in DX12 and Vulkan is a blessing for AMD, though Nvidia has somewhat caught up at the architecture level with Turing.

If Navi manages to introduce good enough compression to work with GDDR6 instead of HBM, they're in a good position for at least mid-range consumer cards.
 
16Gbps GDDR6 only needs a 256-bit (Polaris10-sized) bus to match Vega1's half a terabyte per second of bandwidth, which, combined with the base ~30% increase in clock speeds from 7nm, means that essentially a direct port of 36CU Polaris to 7nm with GDDR6 would more or less match Vega1 already. If they switched to a Tahiti-style 384-bit bus for a future ~60CU model then they could get around 768GB/s; given Vega20's 1TB/s seems overkill for gaming, that could work quite well.
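The bandwidth arithmetic behind this (and the earlier GDDR5 vs GDDR6 point) is just bus width times per-pin data rate; a quick sketch using the rates mentioned in the thread:

# Peak theoretical bandwidth in GB/s = bus width (bits) / 8 * per-pin rate (Gbps).
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

configs = [
    ("Polaris 10: 256-bit GDDR5 @ 8 Gbps",  256,  8),
    ("256-bit GDDR6 @ 16 Gbps",             256, 16),
    ("384-bit GDDR6 @ 16 Gbps",             384, 16),
    ("Vega 20: 4096-bit HBM2 @ 2 Gbps",    4096,  2),
]
for name, bus, rate in configs:
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
# -> 256, 512, 768 and 1024 GB/s respectively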

Then my guess is "scalability" means there's an 80-96CU model coming later, and realistically only that one would need HBM now; that can be their datacentre-focused card.

Basically, from 7nm performance so far and the changes in memory since then, I think we can realistically expect (if the architecture has also improved slightly):
RX680 ~= Vega64 at RX580 prices + power consumption
RX670 ~= Vega56 at RX570 prices + power consumption
RX660 ~= RX580 at RX560 prices + power consumption

This makes it quite competitive with Turing, but obviously NVidia will eventually roll out 7nm mainstream GPUs next year with much better perf/watt as a result of their architecture advantage (these perf/watt numbers wouldn't be too far from Turing; AMD's advantage lies heavily in the smaller die size allowing cheaper cards). The problem with NVidia's architecture advantage is that it doesn't actually let them compete much better on perf/price at the lower end, where the uncore begins to dominate the die space.
 
AMD has never really meaningfully fallen behind in the sub-£250 space

It depends on what you mean by 'fallen behind'. Obviously in terms of pricing AMD is good, but that's because it has lower margins. AMD's chips are more power-hungry and larger for the same performance.

That said, from my point of view (as a developer), much of NVIDIA's success is down to the need to work in specific ways with their chips. AMD chips are more well-rounded, in my experience, with NVIDIA requiring certain things to work in certain ways to get them to perform well. That pretty much forces everyone to work the NVIDIA way (because NVIDIA is the larger player). It's kind of like with CPUs, where there are a lot more Intel-specific optimisations, so AMD is at a disadvantage.
 
Can't see why. NVIDIA has a bigger budget, but all AMD needs is a decent GPU architecture. Zen is a good example: improving the construction cores only went so far, there was a need for a clean break, and it did allow AMD to get back into the game (with a little luck). AMD doesn't need to 'risk their entire company'; it just needs to create a good design and keep moving forward.

Zen was a risk worth taking. Not only does it appeal to gamers, it also appeals to everyone else. It allowed them to create a massive product stack. I don't see what part luck had to do with it.

GPUs, on the other hand, are expensive to design and execute, and the market for those is arguably smaller too.

Creating a good design would cost lots of time and lots of money. That is, of course, if you want to pee with the big boys. Like I mentioned before, Nvidia have a much bigger R&D budget (at least for now). It's all about money. You don't make money from nothing.

Going back to Nvidia being far further ahead than we know and you disagreeing? Well, I beg to differ. They held up "Volta" and "Turing" for ages and ages because, according to Jensen, Pascal was still where it's at, and we knew they had those "two" techs ages ago. We also knew when Turing launched that it should be replaced within a year or 18 months by something else we already knew about.

And that is just using what we, the poggers, know, not what the company knows or any of its secrets. One thing I do know about business, though, is that, like cards, you never show your hand and you always stay one step ahead privately. At least one step.
 
At stock, Pascal and Polaris die sizes & power consumption are only about 15% apart. The RX480 has a 230mm^2 die, the GTX1060 a 200mm^2 one. The RX480 (stock) has a 150W TDP, and you don't lose much performance by forcing the power target down to ~120W as you'd find in laptop variants; the GTX1060 had a 120W TDP. Obviously, because of 14LP's terrible clock-vs-power scaling curves, Polaris' efficiency dropped off much more sharply as the clock speed increased, which is why you could also get 216W RX480/580s with only about a 10-15% performance advantage over the 150W models, but much of that is just positioning it as an all-out gaming card and actively sacrificing efficiency to do so.

Edit:
Zen was a risk worth taking. Not only does it appeal to gamers, it also appeals to everyone else. It allowed them to create a massive product stack. I don't see what part luck had to do with it.

GPUs, on the other hand, are expensive to design and execute, and the market for those is arguably smaller too.

This is no longer true. x86 CPUs are still by far the most complex & expensive type of processing chip you could design, because even if they're relatively small they have an extremely high variety of execution units and far greater per-mm^2 complexity (and a whole mess of legacy & licensing considerations that makes the cost against an ARM CPU several orders of magnitude higher). GPUs might be large, but generally that's because they use several thousand identical copies of one fairly small core containing only a few types of execution unit.

Meanwhile, GPUs have quickly grown into about the most in-demand market there is in computing and are expected to continue on this path of rapid growth as AI develops further. GPUs are now just as versatile as CPUs in many ways, and in far higher demand in many enterprise & research areas.

To not throw all their resources behind catching up on GPUs now would be suicide for AMD. Everyone, including investors at this point, and obviously Intel, knows that GPUs are where the money is going forward. AMD cannot survive as an x86 CPU company with mediocre GPUs; they would just end up back in the Piledriver/GCN days but in reverse, and they can't take that kind of hit twice in a decade.

There's a reason why AMD's R&D budget has ballooned over the last few years while the Radeon group has been significantly restructured with new leadership.

Yes, NVidia held Volta back from consumers for around 8 months, but that's mostly because it wasn't yet economically viable even for a Titan device. Since Volta, NVidia has been at the absolute edge of what is possible with current nodes; if it's physically impossible to create a larger device and they don't yet have a new architecture or a new node to use (even if they used 7nm, there's no indication anyone could actually make such large devices on it yet), then they are at the physical limits of what they can do, period.
 
Personally I'd expect the RX680 to drop in quite neatly between the RTX2060 & RTX2070 (which are relatively close together and use the same silicon; bandwidth-wise we can expect something that lines up well with the RTX2070), with an RX670 dropping in between the RTX2060 & GTX1660Ti. An RX660 would likely be a far bigger jump than the closely packed Turing line-up allows, with a drop straight down to half the bus width & core count more likely, probably delivering performance between the GTX1650 & GTX1660.

We already have the results.

The new 1660 ($220 US official price) is about 8% faster than the RX 590. Considering the RX 590 was something like 6-10% faster than the RX 580, that puts the new GTX 1660 at least around 15% faster than the RX 580.

This GTX 1660 is about 21% faster than the GTX 1060 6GB. It is, on the other hand, around 13% slower than the GTX 1660 Ti.
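Compounding those two quoted gaps gives roughly that figure (a quick check using the percentages as stated, not any particular benchmark set):

# GTX 1660 ~8% faster than RX 590; RX 590 ~6-10% faster than RX 580.
low  = 1.08 * 1.06 - 1   # ~14.5%
high = 1.08 * 1.10 - 1   # ~18.8%
print(f"GTX 1660 vs RX 580: roughly {low:.1%} to {high:.1%} faster")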

That said, I don't think the Navi RX "680" will beat the RTX 2060. Hope it does, but it won't.
 