AMD rumoured to ship 12nm 'Polaris 30' GPUs in October

Hmmm... we've heard rumours of another Polaris release before, though they usually hinted at a wholly new graphics chip altogether, basically a Polaris 10 with more compute units. If it's just a refresh, I don't know how many more of those gamers can take when AMD is so far behind (whether in performance, TDP, or timing) in so many other markets and sectors. It's almost like an admission of defeat. I agree with what you're saying in the article, Mark, that this could be a decent competitor to midrange RTX/GTX, but it could be hard for budget enthusiast gamers to get behind. I think it would have to offer something other than a slight performance boost.
 
Couple of typos, Mark.
Para 2, line 2: 'as' to 'was', and 'AMD working on a Pascal-based'?

Thanks for the spot fella, fixed.


I agree with what you are saying; in an ideal world Polaris shouldn't be a three-year architecture from AMD. That being said, not that many people care about TDP aside from the fact that it looks nice on a graph. The cost of electricity from a more power-hungry video card is hardly enough to bankrupt a person.

If AMD wants a cheap refresh, they can do a Ryzen 2nd Gen with Polaris: transition to 12nm with minimal tweaking. It really depends on how far out AMD's next-gen architecture is, and when we will see low- to mid-range offerings that use it.

With such silence from Radeon, it is hard to do anything aside from speculate at this point. All we know about Navi for certain is that it targets 2019 and it will be using TSMC's 7nm.

Hopefully we will see AMD do a major driver overhaul soon, they like to do BIG driver releases every year, so it is about time for that. Adrenalin Xtreme Edition, lol.
 

But I think people do care about TDP. It's not about the cost of electricity. As you know, TDP is going to affect temperatures and overclocking headroom as well, which in turn affects noise levels. People use these graphs to decide what card to buy. If Vega 64 had the same TDP as the GTX 1080, it would have been far more successful. If the RX 580 had the same TDP as the GTX 1060, it would be a far more successful GPU, at least in my estimation. It's not necessarily because gamers are conscious of their power draw and electricity bills, it's that it helps them determine which is the better of the two when all other factors are equal. The RX 580 and GTX 1060 are basically identical in performance and price, but the GTX wins in efficiency. Even for gamers who don't care at the end of the day, that could break the tie.
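To put rough numbers on that tie-breaker, here's a quick sketch comparing performance per watt at roughly equal performance, using the official board power ratings (185W for the RX 580, 120W for the GTX 1060 6GB); the FPS figure is just a placeholder since the two cards trade blows anyway.

Code:
# Rough illustration: at (roughly) equal performance, board power decides efficiency.
# Board power figures are the official ratings; the FPS number is a placeholder.
cards = {
    "RX 580":   {"avg_fps": 60, "board_power_w": 185},
    "GTX 1060": {"avg_fps": 60, "board_power_w": 120},
}

for name, c in cards.items():
    perf_per_watt = c["avg_fps"] / c["board_power_w"]
    print(f"{name}: {perf_per_watt:.3f} FPS per watt")

# With identical performance, the GTX 1060 comes out ~54% more efficient (185/120),
# which is exactly the kind of gap that breaks a tie on a review graph.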
 
Interesting. I may upgrade the 580 in my HTPC if it is a 680. While it's only really a 1060 replacement, it's welcome if the price is right.
 

Remember, a die shrink means more performance per watt, not necessarily more performance AND more watts. So consumption shouldn't change from the die shrink itself, but it will give much more performance. It may use a little bit more, but that could be down to the PCB and whatever else they decide to add or change, or even faster memory.
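As a purely illustrative calculation of that point (made-up numbers: a ~10% clock bump at unchanged board power, standing in for a hypothetical 14nm-to-12nm shrink):

Code:
# Hypothetical numbers only: a die shrink that buys ~10% clock speed at unchanged power.
old_clock_mhz, new_clock_mhz = 1340, 1475   # illustrative Polaris-style clocks
board_power_w = 185                         # board power unchanged by the shrink

perf_gain = new_clock_mhz / old_clock_mhz - 1                 # performance roughly tracks clocks here
perf_per_watt_old = old_clock_mhz / board_power_w
perf_per_watt_new = new_clock_mhz / board_power_w

print(f"Performance gain:   {perf_gain:.1%}")
print(f"Perf-per-watt gain: {perf_per_watt_new / perf_per_watt_old - 1:.1%}")
# Same watts, ~10% more performance => the entire gain shows up as performance per watt.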
 

Yeah, that's true. The RX 580 didn't really have the benefit of a process shrink. It was just a slight refinement of sorts with a clock speed boost, likely due to manufacturing improvements and not because it's a different node.
 
One small detail worth noting is that GlobalFoundries' 12nm offers no die space savings over 14nm when used on AMD's Ryzen 2nd Generation processors, with both the 1800X and 2700X offering end-users the same die size. Both dies are 213mm².

Part of this change was designed to lower the engineering effort that AMD needed to put in when creating Ryzen 2nd Generation, with AMD focusing on minor tweaks like L2 cache latencies, clock speed boosts and improved Precision Boost algorithms.

Ryzen 1st Gen Die (Via WikiChip)



Ryzen 2nd Gen Die (Via WikiChip)



If AMD is going for a new Polaris 10-based design on 12nm, it is possible that AMD could keep the same die size for clock speed benefits, though they could spend more design time lowering the die size by fully utilising 12nm's design changes. This could achieve die space savings of approximately 12%. Image below from GlobalFoundries.

(GlobalFoundries 12LP vs 14nm area scaling comparison)
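As a rough back-of-the-envelope on that ~12% figure, assuming it applied uniformly to a fully re-laid-out die (the 213mm² Zeppelin figure above is used purely as a stand-in for whatever a 12nm Polaris die would actually measure):

Code:
# Back-of-the-envelope only: apply GlobalFoundries' quoted ~12% area scaling
# to a 14nm die area. 213 mm^2 (the Zeppelin die mentioned above) is just a
# stand-in for a hypothetical 12nm Polaris design.
die_area_14nm_mm2 = 213.0
area_scaling = 0.12   # claimed density benefit of 12LP over 14nm

die_area_12nm_mm2 = die_area_14nm_mm2 * (1 - area_scaling)
print(f"14nm die:                     {die_area_14nm_mm2:.0f} mm^2")
print(f"12nm die (fully re-laid-out): {die_area_12nm_mm2:.0f} mm^2")
# ~187 mm^2 - a real but modest saving, which helps explain why AMD skipped
# the re-layout for Ryzen 2nd Generation.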
 

I was thinking that as well, but I don't know enough about GPUs or CPUs to know whether performance gains can come from anything other than clock speed, as it did for Ryzen+. I can only really think in terms of paper specifications.

Some suggested that GDDR6 might be possible. That would offer a very nice performance boost.
 

The reduction in transistor size will give the performance boost because it will allow higher clock speeds. There's not much else to gain because it's a refresh, only faster memory, which probably won't happen.
 
The Xbox One X uses 40 Polaris-based compute units; the RX 580 uses 36.

Could we see an RX 600 series card with a larger Polaris-based GPU, closing the gap between the 580 and V56?
If it gives more performance for less money, then I'm game.
 

Mark does mention "lowering the latencies of some of Polaris' internal caches" though. That's the type of stuff I don't know very much about.
 

Well, that's if they decide to do anything about it. Because everything can be smaller, things can get closer together. It's a negligible difference if all they do is shrink everything; however, if they slightly refine and retune stuff, then you can definitely get decent results. AMD's biggest problem seems to be memory starvation on their cards; what I mean is they are just not very efficient in memory bandwidth and compression. They need massive memory buses (i.e. 512-bit), yet they are always way behind Nvidia for effective memory speed (even with similar 8Gbps chips).
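For context on the bandwidth point, raw memory bandwidth is just bus width times per-pin data rate. A quick sketch with typical figures (8 Gbps GDDR5 on a 256-bit bus like the RX 580, versus a hypothetical 512-bit bus, plus a GDDR6-style 14 Gbps rate for comparison):

Code:
# Bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

configs = [
    ("256-bit @ 8 Gbps GDDR5 (RX 580-style)",       256, 8.0),
    ("512-bit @ 8 Gbps GDDR5 (wide-bus approach)",   512, 8.0),
    ("256-bit @ 14 Gbps GDDR6 (hypothetical)",       256, 14.0),
]

for label, width, rate in configs:
    print(f"{label}: {bandwidth_gbs(width, rate):.0f} GB/s")
# 256 GB/s vs 512 GB/s vs 448 GB/s - a wider bus or faster memory can stand in for
# better compression, but both add board complexity and power.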
 
It's worth noting that physically shrinking a die at today's transistor densities directly impacts the maximum possible TDP of the chip (and therefore clock speeds in many instances), as you increase the density of the heat being produced while reducing its thermal conductance area. Using smaller transistors (for lower power consumption and faster clock rates per transistor) at the same physical spacing is currently the best way to allow a chip to hit higher clock speeds. This is likely why the 2700X has a ~10% increase in its default TDP. Remember, high-TDP designs are an engineering choice: you lose efficiency at an x^2 rate in the best case (if you need voltage increases) but can hit higher performance targets per mm^2 (and therefore allow better price/performance). This is why Vega chips can reach a significant portion of their performance at almost half their default TDP. Their clock curves are tuned for mobile and APUs, with a sharp ramp-up in power consumption when you go balls-to-the-wall.

This was the same with Fury, which is why you had >300W water-cooled cards and 175W Nano cards hitting almost the same performance levels. And we can see a similar trend with AIBs' "Nano" Vega variants.

Nvidia ship their cards with clocks lower down their power curve, but overclocking that requires increasing the power budget of the cards will see their top models hit Vega levels of power consumption for similarly small performance gains (we're talking ~15% or so, i.e. a few FPS). If you want an example of an Nvidia card that makes similar sacrifices, check out the power consumption relative to performance of the RTX 2080 Ti (it's not pretty).
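The rough maths behind that "x^2 best case" point: dynamic power scales roughly with frequency times voltage squared, so clock bumps that also need a voltage bump get expensive fast. A toy sketch with illustrative numbers only (not measured Vega or Nvidia figures):

Code:
# Toy model: dynamic power ~ capacitance * voltage^2 * frequency
# (capacitance cancels out when comparing ratios).
def relative_power(freq_ratio: float, volt_ratio: float) -> float:
    return freq_ratio * volt_ratio ** 2

base    = relative_power(1.00, 1.00)
mild_oc = relative_power(1.10, 1.00)   # +10% clocks, no extra voltage needed
hard_oc = relative_power(1.15, 1.10)   # +15% clocks, but needs +10% voltage

print(f"+10% clocks, same voltage: {mild_oc / base:.2f}x power")
print(f"+15% clocks, +10% voltage: {hard_oc / base:.2f}x power")
# ~1.10x vs ~1.39x: the last few percent of clock speed costs disproportionate power,
# which is why 'Nano'-style tuning recovers most of the performance at a far lower TDP.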
 

Yeah, that's a good point. Polaris is the same as far as I understand it. Using AMD's WattMan you can get the RX 480 to be very efficient, but by then you've reduced its performance. It's not a huge drop, which is the point you're making, but it's enough for the FPS graphs to favour Nvidia.
 