  #21  
Old 06-12-18, 04:37 PM
AlienALX
OC3D Elite
 
Join Date: Mar 2015
Location: West Sussex
Posts: 11,596
Fury and Vega were AMD's Fermi. In a funny twist of events, Nvidia are back to making Fermi and AMD are about to make Kepler.

And you know what? I foresee a massive comeback for AMD. All of this BS that heavy GPUs are supposed to do is limited to PC only, meaning no one bothers to code for it.

/neck coming out.

I suspect that if games were coded to utilise AMD GPUs properly, then Vega 64 would have been almost as fast as the 1080 Ti. However, because no one bothered, it isn't anywhere close.

I would also surmise that if people had coded with the Fury in mind it too would have been a lot better (i.e. limit the game's engine to using 4GB of VRAM or less). Thing is? Nvidia already had a 12GB consumer GPU out there, so why give a F?

All of those claims of "Oh, it's HBM, so don't worry if you think 4GB isn't enough, because it is" turned out to be false. When the card ran out of VRAM it ground to a halt, then eventually crashed your PC. I know, because I saw it in BLOPS III loads of times!

Then AMD said they had fixed it, so I set to work trying to figure out this "fix". By then I knew exactly which levels and scenes made the VRAM buffer overflow, so I tried them again (with the same settings hacked into place, because the game disables them when it sniffs the VRAM). The game no longer crashed. Woohoo, right? Well, no. After some investigation of my PC's resources it was clear that the card was "texture streaming" from my paging file on the hard drive, and as you can imagine that is as slow as the slowest thing in the world. FPS dropped from the 60s to the low 20s.

At least with Vega they sort of bothered putting more on. I still wouldn't buy a Vega card though.

__________________
He used to do surgery
For girls in the eighties
But gravity always wins



  #22  
Old 06-12-18, 05:07 PM
AngryGoldfish
Old N Gold
 
Join Date: Jan 2015
Location: Ireland
Posts: 2,586
Quote:
Originally Posted by Kaapstad
I have seen these comments about how little power HBM uses compared to GDDR.

To be frank, I don't think these people actually know what they are talking about.

I would agree there is a small drop in power usage with HBM, but not that much.

The arrival of the 2080 Ti and RTX Titan goes some way to proving that GDDR does not use much more power than HBM. If you compare the Titan V and the Turing cards, the former has a slightly bigger core and the latter cards clock higher, but the actual difference in TDP between these 12nm cards is about 30 watts.

https://www.techpowerup.com/gpu-specs/titan-v.c3051

https://www.techpowerup.com/gpu-specs/titan-rtx.c3311

I think the problem with AMD using HBM was that it seemed like a good idea at the time and looked like the way things would go in the future, but unfortunately a lot of the promises and claims for the new memory did not materialise.

I have used a number of HBM-equipped cards and they all tend to perform the same way.

1. They throttle a bit at 1080p compared to GDDR cards.

2. They are better at 1440p and 2160p, where bandwidth counts for more than high clock speed.

I would also agree with what other people have said: for professional use HBM is very good.
Do you know who I'm talking about, the guy with the crazy hair? It's the dude who quite literally takes graphics cards apart and reviews them based on the component choices. The dude who overclocks using LN2. The guy who buys old GPUs and destroys them just to learn why they fail. The guy who writes articles for one of the biggest PC hardware sites in the world. I don't like the fallacy of, 'But bro, he's, like, totally so much smarter than you so you're wronggggg, brah'. But at the same time, I can't think of a more fitting response.

The article breaks down the math. He hasn't just compared two TPU reviews out of context. The 16GB of HBM2 on the Vega FE card was measured with a DMM drawing no more than 20-30W. To reach the same kind of bandwidth as HBM2 using the 512-bit GDDR5 setups AMD have used in the past, power draw from the memory would go up to 80-100W. The memory bus of the 1080 Ti (352-bit) would draw 60-70W. That's over double the power draw of HBM for considerably less bandwidth. GDDR6 wasn't available at the time of Vega, so we're comparing the bandwidth of GDDR5 or GDDR5X against HBM2.
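For anyone who wants to sanity-check the bandwidth side of that, here's a rough back-of-envelope sketch. The bus widths and per-pin rates are the published specs for those cards; the wattages are just the figures quoted above, not measurements of mine:

Code:
# Peak memory bandwidth in GB/s = bus width (bits) x per-pin data rate (Gbps) / 8.
def bandwidth_gbps(bus_width_bits, data_rate_gbps_per_pin):
    return bus_width_bits * data_rate_gbps_per_pin / 8

configs = [
    # (name, bus width in bits, Gbps per pin, rough memory power quoted above)
    ("Vega FE / Vega 64, HBM2",    2048, 1.89, "20-30W"),
    ("GTX 1080 Ti, GDDR5X",         352, 11.0, "60-70W"),
    ("R9 390-style 512-bit GDDR5",  512, 6.0,  "80-100W"),
]

for name, width, rate, power in configs:
    print(f"{name}: ~{bandwidth_gbps(width, rate):.0f} GB/s, ~{power} on memory")

Both the HBM2 cards and the 1080 Ti's GDDR5X land around 480GB/s, while the old 512-bit GDDR5 setup only manages ~384GB/s, so matching HBM2's bandwidth with GDDR5 means a wider bus or higher memory clocks, and either way more power.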
__________________
ASUS X370 Crosshair VI Hero ⁞⁞ Ryzen 1600X 4GHz ⁞⁞ Thermalright Le Grand Macho RT ⁞⁞ Aorus GTX 1080 11Gbps ⁞⁞ G.Skill TridentZ 3200MHz
Jonsbo W2 ⁞⁞ Corsair AX760 ⁞⁞ Pexon PC ⁞⁞ Samsung 960 EVO 250GB & 850 EVO 500GB
⁞⁞ Western Digital 1TB Blue & 3TB Green
BenQ XL2730Z ⁞⁞ Mixonix Naos 7000 ⁞⁞ Corsair K70 Cherry MX Brown ⁞⁞ Audio-GD NFB-15 ⁞⁞ EVE SC205 ⁞⁞ AKG K7XX
  #23  
Old Today, 04:39 PM
BigDaddyKong
OC3D Elite
 
Join Date: May 2013
Location: USA
Posts: 1,085
AMD had problems; they could never make a GDDR memory controller worth a darn. Don't forget they have a total power target for the whole GPU. If they had spent that 80-100W on memory, they would have had to cut core clocks to lower power usage and stay in the target zone.
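Just to put rough numbers on that power-target point (a sketch only: the 295W figure is Vega 64's published board power, and the memory numbers are mid-points of the estimates quoted above):

Code:
# Illustrative power-budget split, not measured data.
board_power_target_w = 295  # RX Vega 64's published board power

for memory, memory_power_w in (("HBM2 (~25W per the post)", 25),
                               ("512-bit GDDR5 (~90W per the post)", 90)):
    core_budget_w = board_power_target_w - memory_power_w
    print(f"{memory}: roughly {core_budget_w}W left for the core and the rest of the board")

That's roughly 65W less for the core on the GDDR5 setup, which on Vega's voltage/frequency curve would have meant noticeably lower clocks.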

What really hurt was HBM pricing and availability. It cost almost double what GDDR did, and that added to the overall GPU price, which had to be passed on to the consumer. When Fury launched, HBM1 was in very short supply.

What I hated was how fast they abandoned us Fury owners. Driver improvements were nonexistent after 6 months, and everything went into improving 580 drivers when it launched.
__________________
AMD R7 1800X - Asus Crosshair VI Hero - 16GB G.Skill 3200MHz - Corsair HX850 - H110i - MSI 1070 - Cooler Master H500 - Windows 10 Pro - Dell U3011 - Dell 2007FP
  #24  
Old Today, 05:31 PM
firefly
Member
 
Join Date: Nov 2005
Posts: 152
I don't think it's far-fetched at all that the Navi 3080 will compete with the RTX 2070.
Nvidia has made some impressive technological advances with their new cards, but all they've really achieved with existing games is to match the price/performance points of the outgoing 10 series, which is pretty underwhelming.
When the 10 series came out, the 1070 was matching and beating the outgoing 980 Ti and Titan cards for a lot less money.
So this stuff about a $250 card beating a $500 card is nonsense, because the $500 card shouldn't cost that much - to match the gains of the 10 series it should be called the 2060 and priced accordingly (and the 2080 should be the 2070).
Nvidia are selling a mid-range card for high-end money.
AMD have been consistently a generation behind in recent years, but Nvidia have effectively taken a step back and allowed them to catch up.
AMD's Navi will have a die shrink to 7nm, which is significant - it will let them pump more clock cycles through the chips. All they have to do is match the 2070/1080/Vega 64.
They've already managed Vega 64 speeds. The Navi chips will have 40 compute units vs 64 for Vega, but they have a massive die shrink, so they can run faster.
I heard that to beat the 2070 they'll have to run at over 2GHz - well, they managed to get the 590 running at 1600MHz vs roughly 1300MHz for the 580, simply on a die shrink from 14nm to 12nm. The 590 is exactly the same chip as the 580; board partners can drop a 590 chip into a 580 PCB and they have a 590 card.
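For a rough sense of that clock requirement: GCN runs 64 shaders per CU at 2 FLOPs per shader per clock, so you can compare raw throughput like this (the 40 CU figure is the rumour being discussed, not a confirmed spec):

Code:
# Raw GCN FP32 throughput: CUs x 64 shaders x 2 FLOPs/clock x clock (GHz) / 1000 = TFLOPS.
def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(f"Vega 64, 64 CUs @ ~1.55 GHz: {gcn_tflops(64, 1.55):.1f} TFLOPS")
print(f"Rumoured Navi, 40 CUs @ 2.0 GHz: {gcn_tflops(40, 2.0):.1f} TFLOPS")
print(f"Rumoured Navi, 40 CUs @ 2.5 GHz: {gcn_tflops(40, 2.5):.1f} TFLOPS")

Raw TFLOPS isn't the whole story (Vega and Turing don't deliver the same performance per TFLOP), but it gives a feel for how much clock speed has to make up for the missing CUs.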
It's clearly a stop-gap card - a stop gap for what? Well, Navi, of course.
OCUK have been clearing out Vega 56 and 64 cards pretty much at cost - Sapphire have pretty much told them that they won't be making any more of them.
If AMD and the likes of Sapphire are clearing out their Vega components, they're clearing them out for a reason.
If the Navi cards are going to offer similar performance but at prices where they can actually make a profit, well, why wouldn't they?
__________________
Intel i7 5820k. Samsung 951 M.2 NVMe 512GB. Gigabyte X99 Gaming 5P. Team Vulcan 16GB DDR4 3000. Asetek AIO 240 cooler. Sapphire R9 390 Nitro OC backplate. NZXT Hue+ lighting. Enermax Galaxy 850W. SPL Crimson Audio Interface