Nvidia Performance Boosting DLSS Tech arrives in Battlefield V this week

Would be nice if they could get SLI working in the game.

It can't be that difficult if you can use RTX, DLSS and SLI in the Port Royal bench.
 
Yeah, it is strange that they don't have DX12 multi-GPU working yet. I know it's uncommon, but DICE takes a lot of pride in their engine and its features, so it's very strange that that doesn't extend to multi-GPU in DX12.
 
I think that's because multi-GPU only really applies to the top-end cards of the most recent generation, and therefore to people who are buying £2,400+ of GPU hardware, which will be something like 0.0001% of their market.

The Radeon VII doesn't support multi-GPU, and I think we can expect none of AMD's or Nvidia's next-gen mainstream cards to support it either, as it's generally being deprecated. The RTX 2080 (the cheapest GPU released in the last year or so to have mGPU support) does support it, but you now pretty much always get more of a gain going to a single 2080 Ti instead, which costs less, so you basically need to be buying two 2080 Tis/Titans for it to be worthwhile at the moment.

I think both companies will stop releasing new multi-GPU driver profiles this year and kill off traditional AFR for good, either until we get more mature DX12 libraries for SFR or similar for the few dedicated dev teams, or until we just skip straight to unified modular architectures.
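
For anyone who hasn't run into the two terms, here's a purely illustrative C++ toy (my own names and numbers, not anything from a real driver or engine) showing the difference between the two splits:

```cpp
// Toy illustration only: the classic AFR vs SFR work splits, with a fake
// two-GPU setup. Nothing here is a real driver or engine API.
#include <cstdio>

constexpr int kGpuCount = 2; // assumed two-card setup

// AFR (alternate frame rendering): each GPU renders whole frames in turn,
// so frame N goes to GPU N % kGpuCount. Near-2x throughput in the best case,
// but it adds latency and every frame's resources must live on both GPUs.
int GpuForFrameAFR(long frameIndex)
{
    return static_cast<int>(frameIndex % kGpuCount);
}

// SFR (split frame rendering): every GPU renders a slice of the *same*
// frame, avoiding AFR's latency but being much harder to load-balance.
struct Slice { int yStart, yEnd; };

Slice SliceForGpuSFR(int gpu, int frameHeight)
{
    const int rows = frameHeight / kGpuCount;
    return { gpu * rows, (gpu + 1) * rows };
}

int main()
{
    for (long frame = 0; frame < 4; ++frame)
        std::printf("AFR: frame %ld -> GPU %d\n", frame, GpuForFrameAFR(frame));

    for (int gpu = 0; gpu < kGpuCount; ++gpu)
    {
        const Slice s = SliceForGpuSFR(gpu, 1080);
        std::printf("SFR: GPU %d renders rows %d..%d\n", gpu, s.yStart, s.yEnd);
    }
    return 0;
}
```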

DICE have done a lot to push the envelope, but I don't think they've ever rolled out technology that wouldn't eventually be beneficial to consoles. Frostbite shares a lot across platforms for such a well-optimised cross-platform engine, and part of the reason it's good on PC is because it was built to squeeze the most out of the performance-limited consoles. We can be fairly sure DXR- and DML-style techniques (or those exact APIs, on Xbone) are coming to the next-gen consoles with Navi, so it made sense that they were first to DXR/RTX and got a healthy amount of time to optimise their stack for the future, just as they were first to the low-level APIs. AFR mGPU, though, is more of a dying piece of tech than a progressive one, and there are bucketloads of much more widely usable progressive techniques they can spend their time on.
 
It surprised me even more given that they switched from SLI to NVLink. What was the point, if "SLI" is neglected by all devs now...
 
They didn't really make an active choice to switch. The NVLink bridges are left over from the Quadro cards, with a couple of lanes blocked off and a simple software layer so SLI can work over them; it would have cost more to design and make a new PCB just for the GeForce cards with traditional SLI bridges, and it likely wouldn't have been worth the time.

NVLink (in its non-legacy/SLI use) will never see the light of day in gaming. It requires application-specific support, and it's an inherently very expensive system that just doesn't make economic sense below £1,000: the links alone can cost half a mainstream GPU's price, and this is probably Turing's least-used new feature (in terms of percentage user share), which is saying something.
 
I simply love reading your posts! So educated and precise, and you have a way of explaining things such that even I get the "complicated details" right in my head.
 
From what I understand, low-level APIs don't use SLI/NVLink or Crossfire in the traditional sense. Isn't mGPU in Vulkan and DX12 supposed to be able to use any two graphics cores, even those from different brands? If that's the case, then Nvidia killing SLI and AMD killing Crossfire shouldn't make much difference to mGPU in future games that utilise low-level APIs, as it'll be down to the developers to use the API, not Nvidia or AMD to support it driver-side. Of course, there's still the question of "is it worth it to support such a small market share?"
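
For what it's worth, the "any two cores" case is what DX12 calls unlinked explicit multi-adapter. A minimal C++ sketch of what that looks like, assuming only standard DXGI/D3D12 enumeration and nothing engine-specific, is below:

```cpp
// Minimal sketch (my own, not from this thread): DX12's "unlinked" explicit
// multi-adapter path. Every GPU, regardless of vendor, is enumerated
// separately and gets its own independent ID3D12Device; from there it is
// entirely the application's job to split and synchronise work between
// them. Error handling omitted for brevity. Link against d3d12.lib/dxgi.lib.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software rasterisers

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            std::printf("Usable DX12 GPU %u: %ls\n", i, desc.Description);
            devices.push_back(device); // one independent device per GPU
        }
    }
    return 0;
}
```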
 
DX12 & Vulkan bypass the drivers (and therefore the traditional mechanism of SLI/CFX), but they can view a multi-GPU setup in "linked node" mode, which allows some degree of compatibility with traditional AFR implementations. In theory you can use two GPUs however you'd like in a DX12/Vulkan setup, which opens the door to all sorts of exotic mixes and weird techniques. In practice, 90% of these will be incredibly suboptimal, hard to balance, and far more work to get right than they're worth.

Nvidia, AMD & MS have put some work into developing multi-GPU libraries (collections of premade software for developers to use to speed things up) for DX12/Vulkan that re-implement the more mature techniques like AFR, but balancing those workloads in the game code for every type of GPU has turned out to just turn an already kind-of-messy solution into a much less workable one. So while in theory low-level APIs have given us an opportunity to replace the old AFR techniques with new, possibly better ones, most of the effort is now just back to trying to get the old ones working in a basic sense first, and it's looking like both hardware vendors and software developers just aren't seeing enough benefit in gaming to chase building such complex infrastructure, still preferring to revert to driver-managed AFR. (Just think of how long it took the hardware vendors to get reasonable multi-GPU setups working, then consider that game developers are smaller teams trying to create software for far more hardware, and you can see why it quickly becomes a flawed approach.) This article discusses some of those techniques: https://developer.nvidia.com/explicit-multi-gpu-programming-directx-12-part-2
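
To make the "linked node" bit concrete, here's a minimal C++ sketch of my own (not taken from the linked article) showing how linked GPUs appear as one device whose queues are routed with NodeMask:

```cpp
// Minimal sketch (mine, not DICE's or Nvidia's code) of DX12 "linked node"
// mode: GPUs joined by SLI/NVLink show up as a single ID3D12Device with
// multiple nodes, and work is routed to a specific GPU via NodeMask
// bitmasks - e.g. one direct queue per node for an AFR-style split.
// GetNodeCount() only reports >1 when the driver has the cards linked.
// Error handling omitted; link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    const UINT nodeCount = device->GetNodeCount();
    std::printf("Linked GPU nodes: %u\n", nodeCount);

    std::vector<ComPtr<ID3D12CommandQueue>> queues(nodeCount);
    for (UINT node = 0; node < nodeCount; ++node)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node; // selects which physical GPU runs this queue
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queues[node]));
    }
    return 0;
}
```

Resource creation takes the same kind of node mask, which is where AFR's cost of duplicating everything on both GPUs comes from.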

But part of the lack of interest is because one of the hottest topics in microprocessor design and research at the moment is multi-die single processors, like Epyc 2, which use a collection of building blocks that can be put together tightly enough to process a single workload coherently, with optimisation work only required for the non-uniform memory access considerations. The bandwidth of links like Infinity Fabric isn't nearly there yet for carrying this tech over to GPUs, but every major tech company has said this is without doubt the direction we're heading in over the next couple of years, and there's a lot of new research into techniques for linking these dies together.
 