Cadence and Micron demo DDR5-4400 memory - 2019 release

That is sincerely great and everything, but what's the point other than for benchmarking (with 3200 still being the sweet spot) - unless I am a complete idiot, please educate me in that case.
 

TBH in today's consumer workloads it won't matter, but it will come into play when apps are designed to make use of more memory bandwidth.

This is where the chicken and egg problem comes in, as software won't be designed with this much bandwidth in mind until most people have access to such ludicrous speeds.

The biggest application will be servers, where increased bandwidth and capacity will offer a huge advantage. For consumers the change won't be major; TBH we would probably be better off with lower memory latencies than with more bandwidth.
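
To put rough numbers on the bandwidth side, here is a minimal sketch of the theoretical peak for a single 64-bit channel (real-world throughput is lower, and DDR5 splits each channel into two 32-bit subchannels, but the total width per DIMM is the same):

    # Theoretical peak bandwidth of one 64-bit memory channel:
    # bytes/s = transfers per second * bus width in bytes
    def peak_bandwidth_gbs(transfers_mt_s, bus_width_bits=64):
        return transfers_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

    print(peak_bandwidth_gbs(3200))  # DDR4-3200: 25.6 GB/s per channel
    print(peak_bandwidth_gbs(4400))  # DDR5-4400: 35.2 GB/s per channel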
 

Thank you for your perspectives, that makes sense. I would love me some 3200 RAM with incredibly low latencies, plus the lower voltage that comes with DDR5 at that lower frequency.
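
On the latency point, what actually matters is the first-word latency in nanoseconds, which falls out of the CAS latency and the transfer rate. A quick sketch (the DDR5 timings here are purely hypothetical, for illustration only):

    # First-word latency: CAS cycles divided by the memory clock,
    # where the memory clock in MHz is half the transfer rate for DDR.
    def cas_latency_ns(cl, transfers_mt_s):
        return cl / (transfers_mt_s / 2) * 1000

    print(cas_latency_ns(14, 3200))  # DDR4-3200 CL14: 8.75 ns
    print(cas_latency_ns(22, 4400))  # hypothetical DDR5-4400 CL22: 10.0 ns

So a faster kit can still have worse real latency if the CAS count grows faster than the clock does.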
 
It has more application in laptops and mobile devices (tablets included). The lower power and the ability to use denser DRAM dies mean manufacturers can get by with fewer physical dies while also saving board space and battery power.

Phones are already using LPDDR4, like the Note 8. So if they can adopt the new standard and get better power consumption from it, on top of using fewer physical dies, that will further improve power draw (for example, using 2 dies vs 3 dies, where the 2 dies also use less power). On top of that, they could probably increase battery size a little, since less space is taken up by memory.
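
As a rough sketch of the die-count math (the densities here are illustrative; actual shipping densities vary, though the DDR5 spec does allow for denser dies than DDR4):

    import math

    # DRAM dies needed for a given capacity at a given per-die density
    def dies_needed(capacity_gb, die_density_gbit):
        return math.ceil(capacity_gb * 8 / die_density_gbit)

    print(dies_needed(6, 16))  # 6 GB from 16 Gb dies: 3 dies
    print(dies_needed(6, 32))  # 6 GB from 32 Gb dies: 2 dies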
 

Exactly, pretty much the same reason why GDDR6 will be great for GPUs: more bandwidth at the ultra-high end, and fewer chips for the same bandwidth plus less power at the low end.

The fact that these newer standards can do more with less is often more important than the speed boosts themselves.
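
To illustrate the "fewer chips for the same bandwidth" point with rough numbers (assuming typical per-pin rates of 8 Gbps for GDDR5 and 14 Gbps for launch GDDR6, each chip on a 32-bit interface):

    import math

    # Per-chip bandwidth: per-pin data rate (Gbps) * chip interface width / 8
    def chip_bandwidth_gbs(gbps_per_pin, chip_bus_bits=32):
        return gbps_per_pin * chip_bus_bits / 8

    gddr5 = chip_bandwidth_gbs(8)   # 32 GB/s per chip
    gddr6 = chip_bandwidth_gbs(14)  # 56 GB/s per chip
    print(math.ceil(256 / gddr5))   # ~256 GB/s needs 8 GDDR5 chips...
    print(math.ceil(256 / gddr6))   # ...but only 5 GDDR6 chips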
 

Yeah, although the higher frequency of GDDR6 is at least beneficial to gaming, for example, since improvements often come more from faster memory than from faster core clocks. It's why I'm more excited for GDDR6 than DDR5.
 

You are right there; the main thing driving DDR5 is the capacity increases being added to the standard. At least the high bandwidth of GDDR6 will be used by regular consumers.

That's more true in synthetics than in gaming.

You say that, but look at the GTX 1080 Ti at 4K. Its performance gains over the GTX 1080 are larger than the difference in raw compute numbers would lead you to expect.

On the AMD side, also consider that most RX Vega 56 users recommend overclocking its memory. Memory bandwidth can have a huge impact, though it varies on a game-by-game basis.
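
Putting spec-sheet numbers on that comparison (a back-of-the-envelope sketch, not a benchmark):

    # GTX 1080 vs GTX 1080 Ti, public spec-sheet values
    def bandwidth_gbs(gbps_per_pin, bus_bits):
        return gbps_per_pin * bus_bits / 8

    cores_1080, cores_ti = 2560, 3584
    bw_1080 = bandwidth_gbs(10, 256)  # 320 GB/s
    bw_ti = bandwidth_gbs(11, 352)    # 484 GB/s
    print(f"compute ratio:   {cores_ti / cores_1080:.2f}x")  # 1.40x
    print(f"bandwidth ratio: {bw_ti / bw_1080:.2f}x")        # 1.51x

At similar clocks, the Ti's bandwidth advantage outpaces its core-count advantage, which fits the 4K results.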
 
For AMD, their cards are memory starved; that's a different story.

For Nvidia it IS more true for synthetics. Nothing is linear: just because a card has x% more cores doesn't mean it's x% faster. The 1080 Ti also has more TMUs and ROPs, and on top of that its memory is significantly faster. OC'ing a 1080's memory will get you a few hundred MHz, if that, while the 1080 Ti's memory is both clocked higher and on a much wider bus, a gap an overclock can't close. Not even comparable, hence not linear.
 