AMD Radeon RX 3000 Series Navi GPU Specs Leaked

Fury and Vega were AMD's Fermi. In a funny twist of events, Nvidia are back to making Fermi and AMD are about to make Kepler.

And you know what? I foresee a massive comeback for AMD. All of this BS that heavy GPUs are supposed to do is limited to PC only, meaning no one bothers to code for it.

/neck coming out.

I suspect that if games were coded to utilise AMD GPUs properly, then the Vega 64 would have been almost as fast as the 1080 Ti. However, because no one bothered, it isn't anywhere close.

I would also surmise that if people had coded with the Fury in mind it too would have been a lot better (i.e. limiting the game's engine to 4GB of VRAM or less). Thing is? Nvidia already had a 12GB consumer GPU out there, so why give a F?

All of these claims of "Oh, it's HBM, so don't worry if you think it's not enough, because it is" turned out to be false. When the card ran out of VRAM it ground to a halt, before eventually crashing your PC. I know, I saw it in BLOPS III loads of times!

Then AMD said they had fixed it, so I set to work trying to figure out this "fix". By now I knew exactly which levels and which scenes made the VRAM buffer overflow, so I tried them again (with the same settings hacked into place, because the game disables them when it sniffs the VRAM). The game no longer crashed. Woohoo, right? Well, no. After some investigation of my PC's resources it was clear that the card was "texture streaming" from the paging file on my hard drive, and as you can imagine that is as slow as the slowest thing in the world. FPS dropped from the 60s to the low 20s.

At least with Vega they sort of bothered putting more on. I still wouldn't buy a Vega card though.
 
I have seen these comments about how little power HBM uses compared to GDDR.

I don't think these people actually know what they are talking about, to be frank.

I would agree there is a small drop in power usage with HBM, but not that much of one.

With the arrival of the 2080 Ti and Titan RTX, it goes some way to proving that GDDR does not use much more power than HBM. If you compare the Titan V with the Turing cards, the former has a slightly bigger core and the latter cards clock higher, but the actual difference in TDP between these 12nm cards is about 30 watts.

https://www.techpowerup.com/gpu-specs/titan-v.c3051



https://www.techpowerup.com/gpu-specs/titan-rtx.c3311
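If you want to sanity-check the bandwidth side of those two spec pages: peak bandwidth is just bus width times per-pin data rate. A quick Python sketch (the per-pin rates below are the commonly published figures for these cards, so treat them as assumptions rather than gospel):

```python
def peak_bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
    # GB/s = bus width (bits) * per-pin data rate (Gbps) / 8
    return bus_width_bits * pin_rate_gbps / 8

# Titan V: 3072-bit HBM2 at ~1.7 Gbps per pin
print(peak_bandwidth_gb_s(3072, 1.7))   # ~652.8 GB/s
# Titan RTX: 384-bit GDDR6 at 14 Gbps per pin
print(peak_bandwidth_gb_s(384, 14.0))   # 672.0 GB/s
```

Near-identical bandwidth from both memory types, with about a 30W gap in board TDP (250W vs 280W), which is exactly the point above.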



I think the problem with AMD using HBM was that it seemed like a good idea at the time, and it looked like the way things would go in the future, but unfortunately a lot of the promises and claims made for the new memory did not materialise.



I have used a number of HBM-equipped cards and they all tend to perform the same way:

1. They throttle a bit at 1080p compared to GDDR cards.

2. They are better at 1440p and 2160p, where bandwidth counts for more than high clock speed.

I would also agree with what other people have said: for professional use, HBM is also very good.

Do you know who I'm talking about, the guy with the crazy hair? The dude who quite literally takes graphics cards apart and reviews them based on the component choices. The dude who overclocks using LN2. The guy who buys old GPUs and destroys them just to learn why they failed. The guy who writes articles for one of the biggest PC hardware sites in the world. I don't like the fallacy of, 'But bro, he's, like, totally so much smarter than you so you're wronggggg, brah'. But at the same time, I can't think of a more fitting response.

The article breaks down the math. He hasn't just compared two TPU reviews out of context. The 16GB of HBM2 on the Vega FE card was measured with a DMM drawing no more than 20-30W. To reach the same kind of bandwidth as the HBM2 that AMD have used, with GDDR5 (512-bit), power draw from the memory would go up to 80-100W. The memory bus of the 1080 Ti (352-bit) would draw 60-70W. That's over double the power draw of HBM while considerably reducing bandwidth. GDDR6 wasn't available at the time of Vega, so we're comparing the bandwidth of GDDR5 or GDDR5X against HBM2.
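Putting his figures side by side, here is a rough bandwidth-per-watt comparison. The bus widths and per-pin rates are standard published specs; the wattages are the measured/estimated figures from the post above, so the exact ratios are assumptions:

```python
def peak_bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
    # GB/s = bus width (bits) * per-pin data rate (Gbps) / 8
    return bus_width_bits * pin_rate_gbps / 8

# (config, bus width in bits, per-pin rate in Gbps, rough memory power in W)
configs = [
    ("Vega FE HBM2, 2048-bit @ ~1.9 Gbps", 2048, 1.89, 25),
    ("1080 Ti GDDR5X, 352-bit @ 11 Gbps",   352, 11.0, 65),
    ("GDDR5, 512-bit @ 8 Gbps",             512,  8.0, 90),
]
for name, bits, rate, watts in configs:
    bw = peak_bandwidth_gb_s(bits, rate)
    print(f"{name}: {bw:.0f} GB/s, ~{bw / watts:.1f} GB/s per watt")
```

Whichever exact wattages you take, the HBM2 stack comes out roughly two to three times more bandwidth-efficient per watt than either GDDR configuration.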
 
AMD had a problem: they could never make a GDDR memory controller worth a darn. And don't forget they have a total power target for the GPU. If they had spent that 80-100W on memory, they would have had to cut the core speed to stay within the target.

What really hurt was HBM pricing and availability. It cost almost double what GDDR costs, and that added to the overall GPU price, which had to be passed on to the consumer. When Fury launched, HBM1 was also in very short supply.

What I hated was how fast they abandoned us Fury owners. Driver improvements were nonexistent after 6 months, and everything went into improving 580 drivers when it launched.
 
I don't think it's far-fetched at all that the Navi 3080 will compete with the RTX 2070.
Nvidia has made some impressive technological advances with their new cards, but all they've really achieved, in existing games, is to match the price/performance points of the outgoing 10 series, which is pretty underwhelming.
When the 10 series came out, the 1070 was matching and beating the outgoing 980 Ti and Titan cards for a lot less money.
So this stuff about a $250 card beating a $500 card is nonsense, because the $500 card shouldn't cost that - to match the gains of the 10 series it should be called the 2060 and priced accordingly (and the 2080 should be the 2070).
Nvidia are selling a mid-range card for high-end money.
AMD have been consistently a generation behind in recent years, but Nvidia have effectively taken a step back and allowed them to catch up.
AMD's Navi will have a die shrink to 7nm, which is significant - that will allow them to pump more clock cycles through the chips. All they have to do is match the 2070/1080/Vega 64.
They've already managed Vega 64 speeds. The Navi chips will have 40 compute units vs 64 for Vega, but they have a massive die shrink, so they can run faster (see the rough sums below).
I heard that to beat the 2070 they'll have to run at over 2GHz - well, they managed to get the 590 running at 1600MHz vs 1300-ish for the 580, simply on a die shrink from 14nm to 12nm. The 590 is exactly the same chip as the 580; board partners can drop a 590 chip into a 580 PCB and they have a 590 card.
It's clearly a stop-gap card - a stop gap for what? Well, Navi, of course.
OCUK have been clearing out Vega 56 and 64 cards pretty much at cost - Sapphire have pretty much told them that they won't be making any more of them.
If AMD and the likes of Sapphire are clearing out their Vega components, they're clearing them out for a reason.
If the Navi cards are going to offer similar performance but at prices where they can actually make a profit from them, well, why wouldn't they?
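To put a rough number on the 40-CU-at-higher-clocks idea: assuming GCN-style CUs (64 shaders per CU, 2 FLOPs per shader per clock) and equal per-CU efficiency between Vega and Navi - a big assumption - the raw throughput sums look like this:

```python
def fp32_tflops(compute_units, clock_mhz, shaders_per_cu=64):
    # FP32 TFLOPs = shaders * 2 FLOPs per clock (FMA) * clock
    return compute_units * shaders_per_cu * 2 * clock_mhz * 1e6 / 1e12

print(fp32_tflops(64, 1545))  # Vega 64 at its boost clock: ~12.7 TFLOPs
print(fp32_tflops(40, 2000))  # a 40-CU Navi at 2GHz: ~10.2 TFLOPs

# Clock a 40-CU part would need to match Vega 64's raw throughput:
print(64 * 1545 / 40)         # ~2472 MHz
```

So on raw GCN maths, 40 CUs at 2GHz still falls short of Vega 64; matching or beating the 2070 at those clocks implies Navi extracting more from each CU per clock than GCN does.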
 

Yeah, Fury performance has plummeted in the last year. New games just aren't running well on those cards at all.


Good points.
 
Of course we've been hyped up by AMD rumours in the past - but here's why I think things might be different this time (with the caveat that I know nowt):

Navi is widely rumoured to be powering the PlayStation 5, alongside a Zen 2 chip, so, if true, they will have a lot of Sony development money to play with.
The PS5 is aiming at 4K gaming, some say 8K, so that is where they are ultimately aiming with these chips.

Navi will have a significant die shrink to 7nm and a new architecture (or a newish one - we don't know yet). They've been behind Nvidia on TDP for a long time - this time they won't be, and they will have the thermal headroom to push clock speeds. (There's not a massive difference between Nvidia's 16nm and 12nm - they are even put on the same page by manufacturer TSMC.)

There's no HBM2 this time, so they can actually make them at prices they can sell.
The chips will also be smaller, which makes them cheaper - more dies per wafer and better yields (see the rough sketch below) - and the part may possibly be cut down from a larger die.
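On the "smaller chips are cheaper" point, the usual dies-per-wafer approximation shows why. The die areas here are illustrative only: Vega 10 is around 495mm², while the small 7nm figure is invented for the example, not a leaked Navi number:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard approximation: gross dies in the wafer area,
    # minus the partial dies lost around the circumference.
    r = wafer_diameter_mm / 2
    gross = math.pi * r**2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

print(dies_per_wafer(495))  # Vega 10-class die: ~112 candidates per wafer
print(dies_per_wafer(250))  # hypothetical small 7nm die: ~240 candidates
```

Roughly double the candidate dies per wafer, before even counting that smaller dies also yield better - though 7nm wafers do cost more per wafer.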

They're only aiming for what is now going to be mid-range performance - which is what the damn 2070 should be. How come a 'mid-range' card is £500? That's mental - it came out priced higher than the 1080 and gave only slightly better performance. It's the Nvidia cards that are priced wrong, not the incoming Navi cards.

I'm not an AMD fanboy, but they stuck it to Intel with Ryzen and Threadripper, and they will do it even more with Zen 2/Ryzen 3xxx - they'll have 7nm chips, and they should be able to do an 8-core at 5GHz at half the price of the overpriced Intel one, and/or add more cores. Intel are struggling with die shrinks and with adding extra cores, so there's not much they can do.

I think they may well stick it to Nvidia in 2019 - they don't even have to be cleverer; they have a better die size, 7nm chips, and Nvidia have totally failed to improve the price/performance ratio of the 20xx cards over the 10xx series.
They've added new tech, but it's a sideways step - are these really going to be real-time ray tracing cards, or simply a historical footnote as the first ones to have it, with the next-gen cards being the ones that can actually implement it in games without large performance hits?

Anyway, the point being: 2070 performance should not be beyond Navi, and it's Nvidia that are looking unrealistically priced rather than AMD, imo.
 
I think it's worth noting that AMD boasted about Polaris' efficiency as well - but as soon as you pushed it to competitive clock speeds, efficiency went out of the window.
 

Yeah, and they compared it to the GTX 960, not the 10 series, which was Polaris' actual main competitor. I mean, Pascal wasn't out at the time, but it still shows that AMD's efficiency claims are often made with tunnel vision, as if they are completely unaware of their competitor and are only comparing against themselves.
 
To be fair, AMD did state at the time that their efficiency claims were for certain cards, and that cards like the RX 480 targeted performance/price rather than performance/watt, while cards lower in the stack shifted towards performance/watt.
With the RX 500 series they mostly scrapped the efficiency targets altogether to maximise performance/price across the board, though.
 