SK Hynix's GDDR6 memory will release in 2018

Basically a refined GDDR5X with a little more speed but much better voltage levels. They were able to release it pretty fast considering GDDR5X didn't come out that long ago.
 
SK Hynix has been planning to mass produce the product for a client that wants to release a high-end graphics card equipped with high-performance GDDR6 DRAM by early 2018.

Haha, we all know who that is... Nvidia! :lol:
 
GDDR5X was made by Micron and pretty much nobody else. GDDR6 was always coming, but Micron created GDDR5X to fill the gap by making a few key changes to GDDR5.

GDDR5X is pretty much a refined GDDR5, doubling the prefetch size to achieve some performance gains. In theory, this could achieve double the performance of GDDR5, but that has not materialised (some estimate that it will max out at around 12Gbps). The GTX 1080 launched with 10Gbps chips, and the 1080 Ti has since improved that to 11Gbps. Yes, it is much better than GDDR5, but it is not GDDR6 material.

Nobody has really detailed the power characteristics of GDDR5X, so for all we know it consumes more power than GDDR5; we can't really be sure.

GDDR6 offers a proper improvement that is worthy of the name GDDR6: lower power consumption and double the performance. This is similar to the move from DDR3 to DDR4.
 
I know. All I was saying is that this is a refined 5X, which it is :p
It's not that great of a performance improvement over 5X, but it reduces power consumption. So that's refined in my book :p
 
Erm, GDDR5 = 8Gbps per pin, GDDR5X = 10-11.4Gbps per pin, GDDR6 = 16Gbps per pin (also lower power).

If anything, GDDR6 is a larger improvement in raw bandwidth over GDDR5X than GDDR5X was over GDDR5. Perf/watt must also be considered: less power going to memory means more power for the GPU to use while maintaining the same TDP.

Memory bandwidth is a strange thing, as you either have enough of it or you don't. GPUs don't necessarily need insane amounts of memory bandwidth, especially GPUs intended for 1080p or even 1440p gaming; there is a reason why the GTX 1070 doesn't need GDDR5X.

The move to GDDR6 will effectively allow lower-end GPUs to use smaller memory bus sizes, decreasing chip complexity. New GPUs could use a 128-bit bus instead of a 256-bit bus for the same bandwidth while also lowering power consumption.
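To put rough numbers on that, here is a quick sketch of the raw bandwidth math (using the nominal per-pin rates quoted earlier in the thread; real products will vary):

```python
# Peak memory bandwidth: GB/s = per-pin rate (Gbps) * bus width (bits) / 8.
def bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Raw peak bandwidth in GB/s for a given per-pin rate and bus width."""
    return per_pin_gbps * bus_width_bits / 8

print(bandwidth_gbs(8.0, 256))   # GDDR5  @ 8Gbps  on 256-bit -> 256.0 GB/s
print(bandwidth_gbs(16.0, 128))  # GDDR6  @ 16Gbps on 128-bit -> 256.0 GB/s (match)
print(bandwidth_gbs(16.0, 256))  # GDDR6  @ 16Gbps on 256-bit -> 512.0 GB/s
```

So a 128-bit GDDR6 bus can match a 256-bit GDDR5 bus, which is exactly why simpler, cheaper memory controllers become viable.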

A refreshed version of the PS4 Pro or Project Scorpio could, in theory, use GDDR6 with a smaller bus to get the same memory bandwidth and potentially reduce costs. The gains from GDDR6 are not just about raising the maximum performance of future hardware; they also make today's performance levels more achievable for lower-cost systems.

While I will agree that the move from GDDR5X to GDDR6 is not as big as some users want, the simple fact of the matter is that this kind of jump in less than two years is huge. The first GDDR5X GPU with 10Gbps memory is barely a year old, and they are now saying that we will have GDDR6 with a 60% performance increase in early 2018! That's huge!

Even compared to the Titan Xp with 11.4Gbps memory, GDDR6 will offer just over a 40% improvement, which again is huge for just a year.
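For reference, those percentages fall straight out of the per-pin rates; a quick sanity check (rates as quoted above):

```python
# Percentage gains implied by the per-pin rates quoted in this thread.
for name, old_gbps in [("GTX 1080 GDDR5X (10Gbps)", 10.0),
                       ("Titan Xp GDDR5X (11.4Gbps)", 11.4)]:
    gain = (16.0 / old_gbps - 1) * 100  # GDDR6 is slated to launch at 16Gbps per pin
    print(f"GDDR6 @ 16Gbps vs {name}: +{gain:.0f}%")
# -> +60% vs 10Gbps, +40% vs 11.4Gbps
```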
 
5X is capable of 12Gbps, with 12-16Gbps being targeted for 2018. They are targeting the same speeds at around the same release time, so in reality it's a minor improvement.

Outside of that, do not expect lower-end GPUs to use it. It will still be expensive and will be reserved for high-end GPUs from Nvidia, and AMD will probably stick with HBM2.

I am not saying it's not impressive, but it is not as impressive as it seems. GDDR5 first came around in 2007 but did not get into GPUs until 2008 with AMD's HD 4870. It has taken 9 years for a successor to come around. So no, I don't think it is as impressive as you think; it's about time, and it's not that great an improvement over 5X in the grand scheme of things.
 
To be honest, I thought that the memory standards body (I forgot its name; I've had alcohol) didn't give anything beyond 5X much merit and thus favoured HBM2 more, leaning more towards NBD's arguments.
 
JEDEC is the one you are thinking of. And I am also sure what you say is true, that they favor HBM over the GDDR standards, since many, many companies put effort into developing that new global standard to replace the aging GDDR standards.
 
Yeah, that's the one! Indeed, they, like me, believe the biggest strides can be made by moving to this new tech rather than staying with old tech that has become quite worn, as the small improvement increments prove. I'm actually surprised by this news... I'm thinking costs weigh in heavily here...
 
We need to remember that GDDR5X has a limited shelf life: it can theoretically deliver 16Gbps, but that is a definitive end point. With GDDR6, that is the starting point.

Remember back to the HD 4870, the first GPU to use GDDR5: it had 115GB/s of total memory bandwidth on a 256-bit memory bus.

Now look at the GTX 1070, which uses GDDR5 on a 256-bit memory bus and gets 256GB/s. GDDR5 simply got faster over time; all that has changed is the speed of the GDDR5 chips, as these are just raw bandwidth measurements after all.
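Working backwards from those two cards (both on a 256-bit bus) shows how far the per-pin rate alone has moved:

```python
# Implied per-pin data rate: Gbps = total GB/s * 8 / bus width (bits).
def per_pin_gbps(total_gbs: float, bus_width_bits: int = 256) -> float:
    return total_gbs * 8 / bus_width_bits

print(per_pin_gbps(115))  # HD 4870 (2008)  -> ~3.6 Gbps per pin
print(per_pin_gbps(256))  # GTX 1070 (2016) ->  8.0 Gbps per pin
```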

GDDR5X is just GDDR5 with 2x the prefetch data for every clock, effectively allowing for double the bandwidth, in theory. This 2x is just a theoretical number and does not guarantee 2x performance, which is why the first products released were only 10Gbps. Even now there are no 12Gbps products, with 11.4Gbps chips restricted to a very small pool of Titan Xp users. GDDR5X can only ever be faster than GDDR5, but doubling a single aspect of a design will not automatically yield a 2x performance improvement.
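A rough way to see the prefetch point (the core clocks below are illustrative assumptions chosen to make the arithmetic line up, not published specs): per-pin rate scales as internal core clock times prefetch depth, so doubling the prefetch only doubles bandwidth if the core clock can actually be held.

```python
# Illustrative only: per-pin rate (Gbps) ~= internal core clock (GHz) * prefetch (words).
# The core clock values here are assumptions for the sake of the arithmetic.
def per_pin_rate(core_clock_ghz: float, prefetch_words: int) -> float:
    return core_clock_ghz * prefetch_words

print(per_pin_rate(1.0, 8))     # GDDR5,  8n prefetch          -> 8.0 Gbps
print(per_pin_rate(1.0, 16))    # GDDR5X, 16n, same core clock -> 16.0 Gbps (theory)
print(per_pin_rate(0.625, 16))  # GDDR5X, 16n, slower core     -> 10.0 Gbps (first GTX 1080 chips)
```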

The big question is why companies would keep working on GDDR5X when they know they can have faster GDDR6 memory working by 2018. This is why only Micron (the creators of GDDR5X) have ever produced it, despite getting it standardised by JEDEC. 16Gbps is an end point for GDDR5X, but it is a starting point for GDDR6, and that is what is important here.

GDDR5X was only ever meant to be a stopgap to solve the "memory bandwidth issue" until GDDR6 could be developed.

If Nvidia is the partner releasing a GDDR6 product in early 2018, then it means that Nvidia will not be buying much more GDDR5X memory after the end of 2017. When Pascal is no longer produced, it is likely that there will be no more products with GDDR5X, as Nvidia is its only customer. GDDR6, by contrast, will be made by several manufacturers, since SK Hynix, Samsung and Micron all plan on producing it, which will mean greater supply and more competitive pricing.

Once GDDR6 releases, there will be no reason to support GDDR5X. Yes, lower-end products won't be using GDDR6 anytime soon, but they won't be using GDDR5X either, for the same reasons.
 
You are overthinking what I am saying and kind of missing the point.
I'm just saying GDDR6 is not that impressive, or at least not as impressive as you think it is. The point is that it's been 9 years since GDDR development introduced any actually new GDDR memory chips. 5X is impressive purely from a speed perspective compared to what 5 was capable of. 6 is finally coming around, but from a pure speed perspective it's nothing crazy. You are comparing a new standard to a 9-year-old standard; it damn well better be way faster and draw less power. Comparing it to just 5X, again, it's not that far ahead of what Micron already has. We aren't seeing faster chips because there is basically only one customer; that's why development is slow. But rest assured, if HBM had never entered the scene, 5X would be cheaper and faster by now.
Really, I'm not as impressed as you are, and the more I think about it, the less impressed I get. It's been 9 years; this should be the minimum of what they could achieve. Sure, the first 5 chips weren't all that fast, but in another 9 years you'll be saying the same about 6 when a "6X" comes along.
 
TBH, until relatively recently there was not much reason for GDDR5X or GDDR6 to exist; the need came with the development of 4K monitors, the release of more powerful GPUs and the increased popularity of high-refresh-rate gaming.

Before now, manufacturers have been able to simply increase the speeds of their GDDR5, so much so that it now offers speeds 2x what they were at launch. In those days, with consoles having such a long lifespan and barely any gamers using resolutions higher than 1080p, the need for extra bandwidth was not really there.

I do understand what you mean, but I have never seen any product roadmap from Micron stating that they will ever make 16Gbps GDDR5X, only that that is a theoretical limit. The highest I have seen on product roadmaps is 14Gbps, and even then the highest we have seen in shipping products is 11.4Gbps.

Below is Micron's GDDR5X part catalog, which maxes out at 12Gbps:

https://www.micron.com/products/dram/gddr/gddr5x-part-catalog#/

What I am saying is that with GDDR5X, Micron needs to "show us the money", as it were; if they had the yields to ship parts like that, they would surely rather sell them than not. Recently, Micron has not announced anything new regarding GDDR5X and is now only talking about GDDR6.

Also, I think that you are underestimating the importance of low power. Power saving is why modern Nvidia mobile GPUs offer the same performance as their desktop counterparts and why Nvidia is winning in the mobile market to such a huge degree. Plenty of products rely on low power consumption, and I imagine the benefits of lower memory power will apply outside of gaming too, perhaps in Nvidia's future Smart Car/Autonomous Car SoCs etc.

I really don't think there is much more worth discussing here, as whether or not it is an "impressive gain" changes from person to person. Regardless, GDDR6 starts off at the theoretical end point of what GDDR5X can offer, and at lower voltages. This effectively makes GDDR5X irrelevant: why would you waste development time making it faster when you could spend that time on GDDR6, which can be both faster and more efficient? TBH I don't see a place in the market for GDDR5X after Q1 2018.

We can debate all day about whether or not GDDR6 offers impressive gains over GDDR5X, but the main thing is that it offers gains in both performance and power, and it will be manufactured by several companies, all of which will work to make supply and competition better than they ever were for GDDR5X.
 