Samsung Takes DRAM to New Heights with Half-Terabyte HKMG-Based DDR5 Memory Modules

This is good and very likely extremely expensive :D

But 20 years down the road this might well be needed for gaming at home; you only need to look back over the years to see where it started and where we are now :D
 
It won't get that extreme unless programmers get very sloppy. Even today's games don't really use that much: Cyberpunk, with 32GB of memory available, only uses around 7GB, and that was at 4K max settings with DLSS on performance mode. Even with DLSS off I didn't see it go any higher than 7GB. I can't imagine games will get so immense that we'll need hundreds of GBs of memory for them.
 
Sloppy programming isn't really the issue. As resolution increases, all the associated assets grow with it: textures, geometry, shaders, video, audio, images. That, plus how big games are getting in world size and complexity, is the biggest contributor to rising memory usage and data capacity.
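Just to put rough numbers on that, here's a quick back-of-envelope sketch in C++ (my numbers, not from any game engine; it assumes uncompressed RGBA8 textures, which real games compress heavily, so treat these as upper bounds):

#include <cstdio>

int main() {
    // Uncompressed RGBA8 = 4 bytes per pixel. Real games use block
    // compression (BCn), so actual sizes are several times smaller.
    struct Res { const char* name; long long w, h; };
    const Res levels[] = { {"1080p", 1920, 1080},
                           {"4K",    3840, 2160},
                           {"8K",    7680, 4320} };
    for (const Res& r : levels) {
        double mib = r.w * r.h * 4 / (1024.0 * 1024.0);
        std::printf("%-5s texture: %6.1f MiB uncompressed\n", r.name, mib);
    }
    // Prints ~7.9 MiB at 1080p, ~31.6 MiB at 4K, ~126.6 MiB at 8K.
}

Each resolution doubling quadruples per-asset memory, and that's before mipmaps, which add roughly another third on top.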

Programmers constantly hammer home memory usage, but the most maintainable solutions are usually favored over the most memory- or logic-efficient ones. That means fewer bugs but more memory used, which is arguably the better tradeoff. I wouldn't consider that sloppy; fewer bugs, or bugs that are easier to fix, is way more preferable.
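To make that tradeoff concrete, a toy example (entirely made up, the function names are mine): the first version copies, which is easier to reason about but briefly doubles that data's footprint; the second works in place.

#include <algorithm>
#include <iterator>
#include <vector>

// Readable: returns a fresh vector and leaves the input untouched.
// Costs a second allocation roughly the size of the input.
std::vector<int> alive_only_copy(const std::vector<int>& hp) {
    std::vector<int> out;
    out.reserve(hp.size());
    std::copy_if(hp.begin(), hp.end(), std::back_inserter(out),
                 [](int h) { return h > 0; });
    return out;
}

// Memory-efficient: erase-remove in place, no extra allocation,
// but the caller's original data is gone afterwards.
void alive_only_inplace(std::vector<int>& hp) {
    hp.erase(std::remove_if(hp.begin(), hp.end(),
                            [](int h) { return h <= 0; }),
             hp.end());
}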

Memory leaks are a problem, though thankfully not super common.
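For anyone wondering what a leak actually looks like, here's about the simplest C++ version of one (the Texture type is made up, purely illustrative):

struct Texture { unsigned char pixels[1024 * 1024]; }; // ~1 MiB each

void load_frame() {
    Texture* t = new Texture(); // allocated every call...
    (void)t;                    // ...used briefly, then forgotten: no delete.
}   // 't' goes out of scope here, but the 1 MiB stays allocated forever.

int main() {
    for (int frame = 0; frame < 100; ++frame)
        load_frame(); // ~100 MiB quietly gone after 100 frames
}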
 
There is a big difference between well-done code and code done in a sloppy manner. Sloppy code can and will take up a lot more memory; that's what I'm referring to, not the developers who actually do a good job. Yes, memory usage will go up as time moves on, but I highly doubt we'll all be using 1TB+ in our systems in one or two decades. 128GB for an "average" system? Possibly. But I can't see everyone on average rocking 1TB.
 
My point was that there isn't a lot of widespread sloppy programming. Everything that is increasing now is a result of what I mentioned before, and using more readable code instead of the most efficient code possible isn't sloppy.

You can have sloppy code that doesn't take up much more memory, situational obviously. It won't be as efficient at runtime, but a few MB of memory in the grand scheme of things isn't a lot for most applications. As long as it's not so egregious that you're reserving a ton of memory for no reason or using outdated practices (which I highly doubt any modern game does), all the current methods basically require you to delete after creating. I'm not a C++ expert by any stretch of the imagination, but there are so many resources online for limiting memory usage.
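Even the basic idiom all those online resources point you to handles the "delete after creating" part automatically. A minimal sketch (std::unique_ptr is standard C++; the Mesh type here is made up):

#include <memory>
#include <vector>

struct Mesh { std::vector<float> verts; }; // made-up asset type

void render_pass() {
    // make_unique allocates; unique_ptr frees the Mesh automatically
    // when it goes out of scope, even if an exception is thrown
    // mid-function. No manual delete anywhere.
    auto mesh = std::make_unique<Mesh>();
    mesh->verts.assign(1'000'000, 0.0f); // ~4 MB of vertex data
}   // Mesh destroyed here.

The pointer frees its object deterministically when it leaves scope, so the leak pattern shown earlier can't happen unless you go out of your way to write it.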

Sloppy code is a thing, but sloppy doesn't automatically mean least efficient. It's better to be less efficient but easier to read than the opposite. Obviously the goal is both, but rarely is that ever going to happen.

Sloppy code would be horribly broken, e.g. AC Unity, or for a more recent example, the tons of memory leaks Cyberpunk had.

I don't know about 1TB of memory, but I could definitely see it within 20 years. I doubt browsers will use that much, but for big applications, sure, why not.
 
I don't see how we'd ever reach 1TB usage in the consumer space. VRAM usage will likely keep climbing, but not to such levels. In the server space, sure, we're already there: disk caches, databases run entirely in RAM, etc. are widely used.

But in terms of games, we're already at the point of diminishing returns when it comes to resolution. I suppose going from 4K to 8K will happen at some point as processing power keeps climbing, but after that it seems completely pointless, as our eyes can only see so well.

But today's layers upon layers of abstraction are definitely putting a fair bit of strain on memory. Web browsers rival operating systems in complexity, and with frameworks like Electron, many apps are essentially fancy web pages.

The only way I see (V)RAM requirements exploding is some new tech that requires it.
 
It's definitely a long shot, but it's also definitely in the realm of possibility. New technology, and new ways of interacting with technology, will dictate memory usage.
 