AMD Radeon RX 3000 Series Navi GPU Specs Leaked

Mining = dead, prices = Polaris launch prices.

Again hard to believe, but hey you never know. If they are small diamond shaped dies they'll cost peanuts.
 
If the top end card is only 15% faster than a Vega 64 I really hope AMD have a high end offering up their sleeves.
 
If the top end card is only 15% faster than a Vega 64 I really hope AMD have a high end offering up their sleeves.

Not yet they don't, and if they know what's good for them, they won't bother.

Polaris was awesome, and their savings were passed on to the customer. If Navi is more of the same at Vega performance they need to do nothing more, IMO.

Fury was a fail, Vega is a fail and both were a complete waste of money IMO. AMD are much better off making bread-and-butter cards like Polaris. I can't see them competing at the top end again tbh, not if they know what's good for them.

They can't go on risking big, fat old power hogs like Vega. They cost loads to develop, loads to make and usually end up far too expensive. The 56 should cost far less than it does, but the big die and HBM make it unnecessarily expensive for the performance you get out of it.

If they can get anywhere near 1080 performance it's a major win.
 
It’s only a matter of time before AMD adopt the Infinity Fabric interconnect from their CPUs on their GPUs, once they shrink the GPU die small enough to fit multiple dies on a substrate like their Ryzen CPUs, and bring the manufacturing costs way down.
 
Yep, I think we're past the days of AMD creating monolithic dies, at least for a little while; 7nm really isn't suited for it atm. I'd say it's now more likely AMD's top-end chip will still build on the use of interposers & HBM from Fury & Vega, but those interposers will now hold a couple of Navi 10 dies rather than a scaled-up die as they did with Hawaii or Fury. It'd bring the same economies-of-scale benefits to their Radeon line as it has to Ryzen, TR & Epyc if they can also unify each generation around a couple of small dies. Presumably the Navi 12 chip will also be shared with Ryzen G parts. This would also help explain the adoption of PCIe 4.0, as IF uses the same physical links, so it has presumably had a similar bump in speed with it.
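To put rough numbers on the economies-of-scale argument, here's a minimal sketch using a simple Poisson defect-yield model. The die sizes, defect density and wafer size below are all made-up illustrative assumptions, not real Navi or 7nm figures:

```python
import math

# Very rough Poisson defect-yield model: yield = exp(-defect_density * die_area).
# Every number here is a made-up illustrative assumption, not real Navi/7nm data.
DEFECT_DENSITY = 0.2                       # defects per cm^2 (assumed)
WAFER_AREA_CM2 = math.pi * (30 / 2) ** 2   # 300 mm wafer, edge losses ignored

def good_dies_per_wafer(die_area_cm2):
    """Estimate working dies per wafer for a given die size."""
    candidates = WAFER_AREA_CM2 / die_area_cm2
    yield_rate = math.exp(-DEFECT_DENSITY * die_area_cm2)
    return candidates * yield_rate

monolithic = good_dies_per_wafer(5.0)          # one 500 mm^2 die per card (assumed)
two_chiplets = good_dies_per_wafer(2.5) / 2    # two 250 mm^2 dies per card (assumed)

print(f"Cards per wafer, monolithic die: {monolithic:.0f}")
print(f"Cards per wafer, two small dies: {two_chiplets:.0f}")
```

Even with those made-up figures the two-die card comes out well ahead on working cards per wafer, and that's before you count being able to share one small die across several product lines.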
 
It’s only a matter of time before AMD adopt the Infinity Fabric interconnect from their CPUs on their GPUs, once they shrink the GPU die small enough to fit multiple dies on a substrate like their Ryzen CPUs, and bring the manufacturing costs way down.

Yup. What we have yet to find out is what exactly AMD meant by "scalable" when they touted Navi ages ago. What if, for example, adding more than one card links up the cards and basically doubles the size of the GPU? I mean, yeah, it would create some latency, but it's got to work better than what we have now.

IDK, I just think AMD are best suited to making value products, not class-leading ones. Ryzen is still quite far behind Intel, yet they are selling two to every one Intel sells.

I'm also going to stick my neck right out on this and say that AMD are not a greedy company. I've never, ever felt that vibe coming from them. They always have to offer basically twice what Intel and Nvidia do for half the cash before people will even consider them; that is how hard it is being AMD. It's all about mindshare, and them having to literally offer you something you would be an utter fool to pass up.

They are still releasing fully unlocked CPUs across the board at great prices, and we are even getting little Easter eggs like the recent Athlon that officially can't be overclocked, only it can. Intel never, ever allow that. I mean, god, could you imagine if one of their partners released a board that unlocked Xeons? lmfao, the pimp hand would be swift!

So yeah, products we need please, AMD, not just products we want. At least then we have a choice.
 
AMD hijacking NVIDIA's next designations is Epyc! "RX 3080" (your move, NVIDIA RTX 3080) :lol: almost as amusing as the i3, i5 and i7 punning with Ryzen 3, Ryzen 5 & Ryzen 7.

The next lawsuits I foresee will be over likeness rights and product naming.
 
It is good to see GDDR6 in the specs.

HBM has cost both AMD and the end user a lot of money for little benefit.
 
It is good to see GDDR6 in the specs.

HBM has cost both AMD and the end user a lot of money for little benefit.

Agreed. HBM should have been left to the prosumer and server market. Creating GPUs that required HBM just to be manageable was a thorn in their side, at least for gamers. If HBM was easy to manufacture, fine. Or if the GPU itself was an absolute beast, fine. But even now, years after its initial design, it's still prohibitively expensive, and AMD's GPUs are not powerful enough to warrant such a complex and advanced memory design.


As to the report, I doubt both the Ryzen 3000 and Radeon RX 3000 series leaks. They all seem too good to be true; I struggle to believe the majority of it. Of course, if the RX 3080 is actually $250 and can compete against an RTX 2070 with a 150W TDP (that's better than Nvidia!), I will sell my GTX 1080 and go back to AMD. I don't really need much more performance than a 1080. I just want an affordable, efficient 1440p card from team Red so I can go back to FreeSync.

And if the 3600X is really $230 for eight cores and sixteen threads with a 95W TDP and a boost clock of 4.8GHz, I'll buy that. Again, I don't need more than eight cores.

But all these things sound just a little bit too good to be true.
 
AMD hijacking NVIDIA's next designations is Epyc! "RX 3080" (your move, NVIDIA RTX 3080) :lol: almost as amusing as the i3, i5 and i7 punning with Ryzen 3, Ryzen 5 & Ryzen 7.

The next lawsuits I foresee will be over likeness rights and product naming.

This type of marketing works. Can't blame them.
 
It is good to see GDDR6 in the specs.

HBM has cost both AMD and the end user a lot of money for little benefit.

Vega 56 and 64 would have been MUCH cheaper had they gone with GDDR5X instead of HBM2, which would have increased sales significantly.
 
Vega 56 and 64 would have been MUCH cheaper had they gone with GDDR5X instead of HBM2, which would have increased sales significantly.

But a 512-bit bus like on the 290X with GDDR5X would increase power consumption by around 80W. Considering Vega 64 is already absurdly power-hungry given its meagre performance, Vega would have had to be a completely different architecture to function with GDDR5. I know I said myself that HBM hindered AMD. What I meant was that Fury and Vega never should have been the designs they were. I don't necessarily think they had much of a choice, at least with Fiji, but still, from the very beginning AMD was going in the wrong direction. Of course, we all know this already, so I don't mean to be preachy. :p
 
But a 512-bit bus like on the 290X with GDDR5X would increase power consumption by around 80W. Considering Vega 64 is already absurdly power-hungry given its meagre performance, Vega would have had to be a completely different architecture to function with GDDR5. I know I said myself that HBM hindered AMD. What I meant was that Fury and Vega never should have been the designs they were. I don't necessarily think they had much of a choice, at least with Fiji, but still, from the very beginning AMD was going in the wrong direction. Of course, we all know this already, so I don't mean to be preachy. :p

If it had GDDR instead of HBM they would probably have made the bus smaller than 512-bit. GDDR5X is clocked faster and can therefore deliver the same bandwidth over a narrower bus, assuming they could get away with it. Hard to know without being an architecture engineer.
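For anyone curious, the arithmetic behind "same bandwidth over a narrower bus" is straightforward. A quick sketch, assuming typical per-pin data rates (roughly 1.89 Gbps for Vega 64's HBM2, 11 Gbps for GDDR5X, 8 Gbps for GDDR5):

```python
# Peak memory bandwidth is just bus width times per-pin data rate:
#   GB/s = (bus_width_bits / 8) * data_rate_Gbps
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(2048, 1.89))  # Vega 64's HBM2: 2048-bit at ~1.89 Gbps -> ~484 GB/s
print(bandwidth_gb_s(352, 11))     # ~11 Gbps GDDR5X only needs a ~352-bit bus for ~484 GB/s
print(bandwidth_gb_s(512, 8))      # plain 8 Gbps GDDR5 is back in 512-bit territory (~512 GB/s)
```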
 
If it had GDDR instead of HBM they would probably have made the bus smaller than 512-bit. GDDR5X is clocked faster and can therefore deliver the same bandwidth over a narrower bus, assuming they could get away with it. Hard to know without being an architecture engineer.

Yeah, true. But even a 352-bit or 384-bit bus can be quite power-hungry, based on what I read from the guy with crazy hair who works with the long-haired dude who takes GPUs apart. xD
 
HBM was only a bad idea from the perspective of gamers, and to be fair, if AMD had continued being the only GPU customer using HBM2 they probably wouldn't have hit supply issues with Vega (you could argue GV100's liberal use of HBM2 in enterprise products, just as AMD wanted to use it for consumer products, was the true killer for Vega & its price). AMD will continue using HBM because it sells cards in the markets with the fattest margins, it improves their market position in segments that matter, it improves server density, power efficiency, scalability, etc., and it is likely going to be an almost fundamental requirement of any GPU using multi-die modules due to bandwidth constraints.

AMD doesn't have the money to do what Nvidia does and create 5 different SKUs for every market segment; they make parts for the markets they know matter and adjust the pricing for the rest. It was presumably cheaper for them to keep the HBM2 design for Vega than to go back to the drawing board and attempt a GDDR5 version, which would inevitably have been memory starved & even more inefficient.
 
Yeah, true. But even a 352-bit or 384-bit bus can be quite power-hungry, based on what I read from the guy with crazy hair who works with the long-haired dude who takes GPUs apart. xD

I have seen these comments about how little power HBM uses compared to GDDR.

To be frank, I don't think these people actually know what they are talking about.

I would agree there is a small drop in power usage with HBM, but not that much.

The arrival of the 2080 Ti and RTX Titan goes some way to proving that GDDR does not use much more power than HBM. If you compare the Titan V and the Turing cards, the former has a slightly bigger core and the latter cards clock higher, but the actual difference in TDP between these 12nm cards is about 30 watts.

https://www.techpowerup.com/gpu-specs/titan-v.c3051



https://www.techpowerup.com/gpu-specs/titan-rtx.c3311
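Back-of-the-envelope with the numbers from those two pages as I recall them (treat them as approximate, not gospel):

```python
# Rough comparison of the two cards linked above; figures are approximate,
# quoted from memory of the TechPowerUp spec pages.
titan_v   = {"memory": "HBM2, 3072-bit", "tdp_w": 250, "bandwidth_gb_s": 653}
titan_rtx = {"memory": "GDDR6, 384-bit", "tdp_w": 280, "bandwidth_gb_s": 672}

delta_w = titan_rtx["tdp_w"] - titan_v["tdp_w"]
print(f"Similar bandwidth ({titan_v['bandwidth_gb_s']} vs {titan_rtx['bandwidth_gb_s']} GB/s), "
      f"yet board power differs by only ~{delta_w} W")
```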



I think the problem with AMD using HBM was that it seemed like a good idea at the time, and it looked like the way things would go in the future, but unfortunately a lot of the promises and claims for the new memory did not materialise.



I have used a number of HBM-equipped cards and they all tend to perform the same way.


1. They throttle a bit at 1080p compared to GDDR cards.


2. They are better at 1440p and 2160p, where bandwidth counts for more than high clock speed.
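A crude way to see why point 2 happens: per-frame framebuffer traffic scales with pixel count, so 2160p moves roughly four times the bytes of 1080p at the same frame rate. The sketch below only counts colour-buffer writes with an assumed overdraw factor; real traffic from textures, depth and post-processing is far larger, but it grows with resolution in much the same way:

```python
# Crude model of bytes moved per second just for colour-buffer writes.
# Bytes per pixel and overdraw are assumed round numbers, purely illustrative.
BYTES_PER_PIXEL = 4   # 32-bit colour
OVERDRAW = 3          # assumed average overdraw factor

def framebuffer_traffic_gb_s(width, height, fps):
    return width * height * BYTES_PER_PIXEL * OVERDRAW * fps / 1e9

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "2160p": (3840, 2160)}
for name, (w, h) in resolutions.items():
    print(f"{name}: ~{framebuffer_traffic_gb_s(w, h, 60):.1f} GB/s at 60 fps")
```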



I would also agree with what other people have said: for professional use, HBM is very good.
 