AMD's Radeon RX 5700 XT appears on 3DMark Time Spy

If the price is the same as an RTX 2070, I don't see the point in anyone buying it over an RTX 2070, other than those who refuse to own an Nvidia card. There's not a lot about Navi 10 architecturally that I see to incentivise buyers. So rather than aiming at gamers and giving them something unique that sets them and their products apart, they're potentially aiming at AMD fans only. I don't agree with that. I could understand it with Vega, given how expensive it was to manufacture, and how enthusiasts would still buy Nvidia even if the competition were equal, so why undersell themselves? But Navi should not be as costly, and it's seemingly an architecture built for gaming across a wide range of applications. It's attempting to be an all-encompassing gaming architecture. Why price it above what most gamers can afford?

I'd love to know what's going on at the top of the AMD management pile. If they do price-match Nvidia, what is their aim? What is their long-term goal? What have their marketers and analysts told them that we might not know? Some have said AMD are happy that Nvidia released such overpriced cards, because now they can price theirs the same and reap higher profits. It does make sense, but I don't think it'll help the PC market in the long term. I won't be buying a card that's no better than the competition's year-old one. Why would I? AMD wouldn't be giving me a reason to.
 
If it performs the same and is, say, $50 cheaper, then why wouldn't you buy it? Because the architecture doesn't incentivise you? Why, because it's missing Tensor cores and RT? That's a silly thing to even consider. Even RTX cards are effectively missing those. They may have the hardware, but it's useless and unused. There are only enough games that use it to count on one hand, aren't there? Even after a year? What kind of incentive is that? On top of that, one of its major features absolutely cripples any card, including the 2080 Ti. Using RT as an argument in a comparison between a 2070 and a 5700 XT is kind of boggling. Hardly anything uses RT, AND it tanks performance to essentially console levels. And so far the whole AI AA implementation has proven to produce a blurry, worse-looking overall image, on top of being limited to roughly 60 fps before it gets disabled.


I see no argument between architectures where Navi is competing. I see it as a non-RTX 2070 (because its RTX features are useless) versus a 5700 XT.
It's all price/performance.

If it were a $1000 Navi card vs a 2080 Ti, I would be in agreement.
 

Historically, AMD have had crappy drivers at launch and then improved performance over time. That means this card could potentially end up performing better than the RTX 2070 after a few months.
That said, 200 points is within the margin of error for 3DMark runs, so it could already be performing on par with the 2070 on prerelease drivers.
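As a rough sense of scale (the baseline score below is a hypothetical placeholder, not a figure from the leak), a 200-point gap on a Time Spy graphics score in this class works out to only a couple of percent:

```python
# Rough sense of scale for a 200-point gap in Time Spy graphics score.
# The ~8,700 baseline is a hypothetical placeholder, not a measured
# result from the leak.
score_a = 8700              # hypothetical 5700 XT graphics score
score_b = score_a - 200     # the card trailing by 200 points

gap_pct = (score_a - score_b) / score_a * 100
print(f"A 200-point gap is about {gap_pct:.1f}% of the score")
# -> about 2.3%, easily within run-to-run variance
```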
 
Seems to be >10% more gaming performance for >10% less money. It's not a giant value proposition, but it's there.
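Putting rough numbers on that (both ratios below are assumptions from this post, not confirmed prices or benchmarks), the two factors compound:

```python
# Back-of-the-envelope value comparison. Both ratios are placeholder
# assumptions from the post above, not confirmed figures.
perf_ratio = 1.10    # assumed: ~10% more performance than a 2070
price_ratio = 0.90   # assumed: ~10% lower price than a 2070

value_gain = perf_ratio / price_ratio  # relative performance per dollar
print(f"~{(value_gain - 1) * 100:.0f}% better performance per dollar")
# -> ~22% better performance per dollar
```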
 
I have stated this in a previous post: Wendell (Level1Techs) and EposVox are recommending AMD systems (Threadripper, Radeon VII) for dedicated rendering stations due to their very good render performance and low cost, and even an Intel 9700K over any AMD for your editing machine due to its smooth workflow and realtime performance. That way you work on your new videos while the edited ones are rendering at the same time. It beats any gazillion-core workstation.

Now imagine a Ryzen 9 3950X, 64 GB of RAM, 2x RX 5700 XT, and PCIe 4.0 NVMe storage. That thing would grind through renders for much less than the price of the competition. You could probably even slap a decent open loop on it and have money left for quite a few beers compared to the alternative.

Also, with Intel's component shortages, many system builders have started offering AMD rigs. We will have tons of full-AMD gaming systems. Probably priced better than green/blue ones, they will sell bucketloads of them, and the performance will be in the same ballpark. Most pre-built gaming systems are in the 2060/2070 range anyway. AMD will probably offer some incentive for system builders to make all-AMD rigs.

AMD has a good idea here.
 
Gonna stick my neck out with some optimism and say that hopefully prices will drop quite quickly. They did on Vega after mining was over and AMD had to compete again, and Nvidia have already peed on AMD's parade, so yeah, I can't see prices staying like this for long.
 

I never said anything about Tensor cores or RT. I'm not incentivised to buy RTX cards either, and I never said I was. I used the word "architecturally" as an all-encompassing term, not just ray tracing. RT or not, nothing from either company interests me personally. The 2060 is a decent little card, but it's nothing special. The 5700 and XT models appear decent on paper. At potentially $50 less for roughly the same performance, that's fine. It's not an incentive for me to buy a new card after three years, though. If it is for anyone else, that's OK with me. I have no qualms with that. We all have different demands and desires.

Besides, if the 5700 and XT models are $50 cheaper for the same performance, Nvidia are likely going to reduce their prices. That's what Super seems to be for: rather than admitting to Navi's strength and dropping prices, they can say they're dropping prices because they themselves have something better, not because of AMD.

Edit: That's if the 2070 and 2080 non-Super versions stick around, but I've heard they're now EOL and the Super variants are the only ones in production.
 
GPUs are now very mature ASICs, so the gains in performance at this point are almost entirely down to transistor budget increases allowing wider architectures or more cores, plus clock speed gains; this is referred to as the "accelerator wall" by some. (Research: http://parallel.princeton.edu/papers/wall-hpca19.pdf)

From 2011 to 2017 there was only about a ~1.2x performance gain on average from optimisations, but a ~4.5x gain overall, so more than 3.5x came from node improvements.
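As a quick sanity check on those figures, assuming the overall speedup factors into optimisation gains times node gains:

```python
# Decompose the 2011-2017 gain cited above, assuming the overall
# speedup factors into optimisation gains times node gains.
overall_gain = 4.5        # ~4.5x total gain over the period
optimisation_gain = 1.2   # ~1.2x from architectural optimisations

node_gain = overall_gain / optimisation_gain
print(f"Node improvements account for roughly {node_gain:.2f}x")
# -> roughly 3.75x, i.e. more than 3.5x from process scaling alone
```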

But not only has our dependency on Moore's Law for improvements increased as the field matured and R&D costs skyrocketed; Moore's Law itself has slowed, and node improvements now often come with sacrifices of their own, at least until EUV is more mature. So I don't think it's reasonable to still expect the big jumps every two years that we saw in the past.

I'd think Nvidia's next move would be a node shrink and optimisation of something closely resembling Turing on most units, but only once 7nm EUV has matured, so they know they can make the jump without big sacrifices in possible top-end die size. (And similarly, I think the reason we've not seen AMD do a truly big-die GPU on 7nm is that 7nm can't do that economically yet.)
 