Micron unveils specifications of GDDR5X chips

WYP

News Guru
Micron has unveiled the specifications of their GDDR5X, giving us a glimpse of what to expect from the new memory standard on future GPUs.



Read more on Micron's GDDR5X Specifications.
 
From what I can see, for now at least, there are only going to be 256-bit and 128-bit GDDR5X configurations.

Yep, but the 384GB/s memory on the 256-bit bus could probably be pushed over 400GB/s with a decent overclock. Hell, the 980 Ti's memory bandwidth can be pushed from 336GB/s to 384GB/s just by taking the memory from 7GHz to 8GHz.
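For anyone who wants to check the arithmetic: peak bandwidth is just the bus width in bytes times the effective data rate. A rough sketch below (the 12Gbps GDDR5X figure is my inference from the quoted 384GB/s on a 256-bit bus, not a confirmed shipping spec):

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Peak memory bandwidth in GB/s = (bus width in bytes) * (effective data rate in Gbps)
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(384, 7))   # 980 Ti stock:        336.0 GB/s
print(bandwidth_gbs(384, 8))   # 980 Ti overclocked:  384.0 GB/s
print(bandwidth_gbs(256, 12))  # GDDR5X on 256-bit:   384.0 GB/s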
 

I read your original post incorrectly, apologies. I see what you mean. :)

*Edit*

To clarify: in my mind I read 384-bit and not 384GB/s :)
 
Doesn't really have a future though, at least for AMD, considering they are using HBM2 for Vega and GDDR5X won't be available until around or after the launch of the new GPUs. Nvidia would probably follow the same route too.
 

It all comes down to pricing and bandwidth requirements really. If GDDR5X is cheaper to buy and to implement, it will continue to be used in the lower end of the product stack.
If HBM proves to be just as cost effective, after accounting for development and implementation costs (all that interposer magic cannot be cheap right now), then HBM might be used across the entire product stack on next-gen parts.

What interests me is what else AMD can do with an interposer. AMD could potentially place two GPU dies on the same interposer to create a dual-die GPU that acts as a single unit. If AMD could achieve this, they could use multiple cheap-to-produce, high-yield GPU dies for their high-end products.
 

Wouldn't multiple GPUs stacked create immense heat buildup?
 

Not stacked, side-by-side. Imagine the Fury X package with its four HBM chips, but instead of one single large GPU die you have two smaller dies, all connected together via the interposer just like the HBM is. This is totally how I see things moving, especially with Raja talking about the "economies of the small die" so much recently. Remember that an interposer has uses beyond HBM.

Two GPU dies on a single interposer could act as a single unit if done right. It wouldn't be CrossFire, because the two GPUs are actually merged together through the interposer to form one big chip.
 

Oooohhhh, I think I follow now; and if I do, then yes, that would indeed be interesting! Basically all the advantages of dual-GPU cards without the hiccups they can cause, right?
 