Just A Thought About Nvidia Surround / AMD Eyefinity

Jeremy1998

New member
I'm probably not the first person to think of this, but...

Why do the outer monitors in a Surround or Eyefinity setup need to run at the same detail level as the center monitor? In a multi-monitor setup, the outer monitors are mostly there for immersion and peripheral vision, not direct viewing. So why would they need to be rendered at, say, "Ultra" settings?

Wouldn't it make multi-monitor setups much more viable if you could set the center monitor's and the auxiliary monitors' settings independently?
 
I'm not sure that would work, as my understanding is that in Surround the GPU renders one large 5760x1080 image, not three separate 1920x1080 ones. I could be wrong, though.
 
It is possible, since you can already use two different monitors without Surround/Eyefinity.
They don't do it because it would mess with the FOV: with mismatched monitors, the smaller ones would show a zoomed-in image. You could correct for that in code, but it's just not worth it.
 
You couldn't use multiple cameras from the same origin, as the cameras on either side would overlap the center view frustum, so a single viewport would have to be used.

It might be possible to do the post-processing on a 1080p render target and apply it only to the center screen's region, but even then, none of it is worth anyone's time.
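
For what it's worth, the "center region only" part could be sketched with a scissor rectangle: render the full scene once, then restrict the post-effect draws to the middle third of the framebuffer. A minimal OpenGL-flavored sketch, assuming a 5760x1080 target; drawFullscreenPass() is a hypothetical stand-in for a real effect pass:

```cpp
#include <GL/gl.h>

// Stand-in for one post-processing pass (bloom, DoF, ...); a real
// implementation would bind the effect's shader and draw a
// fullscreen triangle into the bound render target.
void drawFullscreenPass() { /* bind shader, issue draw call */ }

// Run the post-FX chain only over the center monitor's third of a
// 5760x1080 framebuffer, leaving the side thirds untouched.
void runPostFxCenterOnly() {
    glEnable(GL_SCISSOR_TEST);
    glScissor(1920, 0, 1920, 1080);  // x, y, width, height of center screen
    drawFullscreenPass();            // the effect now only writes the center
    glDisable(GL_SCISSOR_TEST);
}
```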
 
I don't know if we're all thinking on the same page. I'm saying that if you had 5760x1080 running on three identical monitors (as you normally would), the left and right monitors could render all non-essential things (textures, water detail, etc.) at slightly lower detail (High instead of Ultra). The render distance should probably stay the same, since changing it would screw with things too much, but those other things wouldn't matter, since you can't see crystal clear out of your peripheral vision anyway.
 
So say we were to create a frustum in world space that contains just the center screen. We could then check whether an object is outside that frustum (but still contained within the full-screen frustum) and assign a lower-poly model / lower-resolution texture to those objects. OK, that doesn't seem too difficult.
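
As a rough illustration, here's a minimal C++ sketch of that check. The Vec3/Plane/Frustum/Sphere types and the Lod enum are made up for the example; a real engine would use its own math library and bounding volumes:

```cpp
#include <array>

// Minimal made-up types for the example; a real engine would use its
// own math library (GLM, DirectXMath, ...) and scene representation.
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };               // dot(n, p) + d >= 0 is "inside"
struct Frustum { std::array<Plane, 6> planes; };
struct Sphere { Vec3 center; float radius; };

// Standard sphere-vs-frustum test: the sphere is outside if it lies
// entirely behind any one of the six planes.
bool intersects(const Frustum& f, const Sphere& s) {
    for (const Plane& p : f.planes) {
        float dist = p.n.x * s.center.x + p.n.y * s.center.y
                   + p.n.z * s.center.z + p.d;
        if (dist < -s.radius) return false;
    }
    return true;
}

enum class Lod { Ultra, High };

// Assumes the object already passed culling against the full
// 5760x1080 frustum. Anything that never touches the narrower
// center-screen frustum can only appear in peripheral vision, so it
// gets the cheaper model / texture set.
Lod pickLod(const Frustum& centerFrustum, const Sphere& bounds) {
    return intersects(centerFrustum, bounds) ? Lod::Ultra : Lod::High;
}
```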

Large bodies of water would be more difficult. One way to handle them would be to check whether a water vertex falls within the center frustum: if it does, compute a more accurate normal in the pixel shader (slower); if not, settle for the normal from the vertex shader (faster). This would, however, require a branch operation in the shader, and those are not particularly fast.
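
In code, that branch might look something like the following. This is a C++ stand-in for what would really live in HLSL/GLSL, and vertexStageNormal() / pixelStageNormal() are hypothetical placeholders for the two normal sources:

```cpp
struct Vec3 { float x, y, z; };

// Placeholder for the interpolated normal handed down from the vertex
// stage: cheap, computed once per vertex.
Vec3 vertexStageNormal() { return {0.0f, 1.0f, 0.0f}; }

// Placeholder for a per-pixel normal built from extra detail-map
// fetches: more accurate, but costs several texture reads per pixel.
Vec3 pixelStageNormal() { return {0.05f, 0.99f, 0.05f}; }

// The per-pixel decision. On a GPU this is a dynamic branch: pixels in
// the same wave that disagree (e.g. along the seam between the center
// and side screens) pay for both paths, which is why the branch itself
// can eat into the savings.
Vec3 waterNormal(bool insideCenterFrustum) {
    return insideCenterFrustum ? pixelStageNormal() : vertexStageNormal();
}
```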

Post-FX are what really slow games down, though, as they require multiple passes over the scene: things like DoF, motion blur, AA, bloom, SSAO, etc. They are not the simplest things to do well, and with this added complication it may actually slow them down.

This is an interesting topic, though, and I'd be surprised if AMD/NVIDIA haven't already tried it. It would make a great dissertation topic.
 
With their triple-display setups, NVIDIA and AMD strive for game compatibility without requiring developers to make changes to their games.

What you're suggesting would most definitely require developer participation. I think that's why they ruled it out as the way to go: for a new technology to make waves, it really needs to be as frictionless as possible.

Take 3D, for example: what are the two biggest issues with that technology?
1. You need a new screen (120Hz or higher)
2. You need to wear the 3D Glasses

People are willing to buy a high-refresh-rate display anyway because it's an all-around better experience and it just works with everything; you don't need anything special to see the gains. But for 3D you do need to wear those pesky glasses, which is a burden.

It's the same with 4K. It's having a difficult time on Windows because, although the displays are now available, Windows still doesn't play well with them: the UI is inconsistent, and some programs are so tiny on screen that you can't use them, while others scale perfectly.

Now to bring it back to the triple-display thing: NVIDIA and AMD have both been hard at work over the past few years making Surround and Eyefinity better. I'm an NVIDIA user, and I can say that NVIDIA brought out a toolkit that maximises program windows to just the display they're on instead of across the whole desktop (filling all three displays), and it centers the taskbar instead of spreading it across all three displays.

AMD and NVIDIA worked hard on game compatibility, and they even got some developers to include native support for triple setups; Guild Wars 2 and Battlefield 3 come to mind.

But doing what you're suggesting would definitely require developer participation, and I think if they had gone only that way, without offering the current system as well, Surround and Eyefinity would be seldom seen in games. It would be like Mantle or PhysX, instead of how it is now, where you can fire up almost any game you can think of and just play it in Surround/Eyefinity.

But that doesn't mean the idea isn't good. I think it should be pursued as an option for developers to make use of if their game fits. Driving games, for example, really don't need high detail on the side panels, and if a developer wanted to spend some time working on that, I think they should be supported in doing so.
 
Yeah, I guess it would need to be a choice for developers to make... but I still think the option to add that feature should exist. It could catch on in popular games like Battlefield and (gross, I know) COD.
 