Interesting topic.
One thing I've found regarding vRam usage is that pretty much every game I have ends up using over 1,200MB after some time. This is because textures get cached in vRam as you play, so, for the most part, vRam often appears quite heavily loaded.
For the record, I have a pair of GTX 570s (1,280MB vRam) and run at 1920x1200. Being in SLI, I basically max everything.
Things I've directly observed:
I can be playing Skyrim / Crysis 2 / Terran Conflict (niche title) and a variety of other games and I'll see my vRam usage climb to ~1,250MB. I can then SAVE my game (or hit a checkpoint), quit, and then reload the same scene and initially see a much lower vRam usage reported. I'm loading and doing a 360 to ensure all local textures etc. are loaded to render the scene. So, in the exact same place in the game where I was previously seeing ~1,250MB of vRam used, I may now see ~700MB used. This shows that data not directly needed to render the current scene/area was previously held, but purged after a save/reload. Usage will increase fairly rapidly as I continue to play, of course.
I also think that developers must code to take advantage of vRam, permanently caching textures that are fairly common etc. - basically making the best use of whatever hardware is available.
The upshot, from what I've observed, is that two PCs running the same game at the same settings, but on GPUs with a different amount of vRam, will often show the card with the greater vRam reporting a higher usage overall. Yet both systems can give near-identical FPS. My SLI 570s with 1,280MB vs. my friend's SLI 480s with 1,536MB of vRam - both clocked at 800MHz core - give FPS within a margin of error in most instances, but his cards can report a slightly higher overall vRam usage after some play time.
Interesting stuff. Adding AA and other post-effects in games can of course impact vRam usage overall; this is to be expected. Yet even testing with AA and then without in a number of titles, my vRam usage pretty much always climbs to ~1,250MB. I have, however, NEVER seen the full 1,280MB reported as in use on my cards - the highest ever is around the 1,260MB mark - but maybe that is still effectively maxed out due to how vRam is allocated or something.
Regardless, even when my vRam usage is at its highest, I've never been in an "out of vRam" situation where FPS tanks. The card/drivers/game seem to manage usage quite well. I guess it's a little like how Windows caches stuff you might need in system Ram so it loads faster. It makes sense that a developer would code to take advantage of vRam in this way. Also, at a driver/game level, not purging data unless something currently needs the space can also make things more efficient.
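To show what I mean - and this is purely my own sketch of the idea, with made-up names, not how any actual driver or engine does it - the logic is basically a cache that keeps everything resident and only throws out the least-recently-used textures when a new one won't fit:

```python
# Purely illustrative sketch (not any real driver's behaviour): a texture
# cache that keeps everything resident and only evicts least-recently-used
# entries when a new allocation won't fit in the remaining vRam.
from collections import OrderedDict

class TextureCache:
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.textures = OrderedDict()  # texture_id -> size in MB

    def request(self, texture_id, size_mb):
        if texture_id in self.textures:
            self.textures.move_to_end(texture_id)  # mark as recently used
            return "cache hit"
        # Only purge when the new texture genuinely needs the space
        while self.used + size_mb > self.capacity and self.textures:
            _, evicted_size = self.textures.popitem(last=False)  # drop LRU entry
            self.used -= evicted_size
        self.textures[texture_id] = size_mb
        self.used += size_mb
        return "loaded from disk"

cache = TextureCache(capacity_mb=1280)
cache.request("rock_diffuse", 64)   # loaded, then stays cached even if unused
cache.request("rock_diffuse", 64)   # hit: no reload needed
```

With something like that, reported vRam usage just creeps up towards the card's limit over a play session, which matches what I see, while nothing actually gets thrown out until the space is needed.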
When I'm next at my gaming rig I will try to get some shots of Skyrim - I run a shed load of texture and mesh mods to make things prettier. Crysis 2 is just maxed in DX11 - I've not applied any of the texture mods to that, though I'm thinking about it.
Good work on starting this thread, I think it will paint an interesting picture of how modern GPUs utilise the vRam available to them. It also shows how GPUs which are considered to have "limited" vRam can still provide an excellent gaming experience.
One final observation - I'd be interested to know if others concur here: cards with less vRam are often low- or mid-range cards these days. When increasing in-game settings, these cards seem to run out of GPU power long before they run out of vRam. So, at high game settings, we start to see markedly lower FPS, but not the total tanking of FPS we'd expect in an "out of vRam" situation.
I recall, back when I'd just got my Q6600 + 8800GT 512MB (new), my friend and I tried to run the original Crysis at 2560x1600. The card performed admirably and the game was perfectly playable - if a little way off the holy grail of 60fps - still, it was fine. Adding even 2x AA killed the card dead - barely 3fps - a perfect example of "out of vRam". So, the GPU had enough power to play the game, but the extra vRam usage of just 2x AA was too much for it. Maybe a 1GB variant of the card would have taken far less of an FPS hit. An interesting comparison that would have been...
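For a rough idea of why that tips a card over - and these are just my own back-of-an-envelope assumptions about buffer formats, not anything I measured at the time - the basic colour and depth buffers at 2560x1600 grow quickly once MSAA stores extra samples per pixel, and Crysis uses plenty of additional render targets on top of this:

```python
# Back-of-an-envelope sums (assumed RGBA8 colour + 32-bit depth/stencil;
# real engines use several more render targets, so this is a lower bound).
width, height = 2560, 1600
bytes_per_pixel_colour = 4          # RGBA8
bytes_per_pixel_depth = 4           # e.g. D24S8

def framebuffer_mb(samples):
    pixels = width * height
    colour = pixels * bytes_per_pixel_colour * samples
    depth = pixels * bytes_per_pixel_depth * samples
    resolve = pixels * bytes_per_pixel_colour if samples > 1 else 0
    return (colour + depth + resolve) / (1024 * 1024)

print(f"No AA  : {framebuffer_mb(1):.0f} MB")   # ~31 MB
print(f"2x MSAA: {framebuffer_mb(2):.0f} MB")   # ~78 MB
```

Tens of extra MB doesn't sound like much, but on a 512MB card that's already stuffed with textures at that resolution, it's enough to push data out to system Ram over the PCIe bus, and that's when FPS falls off a cliff.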
Of course there are a lot of variables - resolution, game settings, the individual's preference for smooth FPS vs. graphical features etc. - but this thread should help answer the question of how much vRam is "enough", as subjective as that question may be due to the variables involved.
As is traditional for me, another long waffle
Cheers,
Scoob.