Is SLI still worth it?

I want to upgrade so I can get more FPS in games. I know it's better to have one top-of-the-line card over two lesser cards, but I can't afford a 2080 Ti and can probably afford another 1080. Are there any real big issues with going SLI?
 
Depends on the game. Frostbite engine still supports it, so the likes of BF5 do allow SLI (it was recently disabled, but I know a workaround).

Witcher 3 still supports SLI, Tomb Raider, etc.
Ubisoft games don't support it, and Anthem will in the near future.

I have 2x 1080 Ti with SLI enabled. When it works it's great; when it doesn't, I lose an FPS or two versus a single card running with the second idle.
 
Personally I'd recommend going to a 1080 Ti instead if that's the best single-card choice, as SLI doesn't work with many DX12 or Vulkan titles and even some modern DX11 titles have little to no support, while single-card solutions are generally a lot more efficient and, depending on case/cooling, quieter.
But even when it does work, an SLI setup at a given FPS only gives you the frame/input latency of a single card at half that FPS, and in the FPS range where SLI is most stable and least prone to microstutter, you're usually looking at much longer frame times.

I'm not sure if this is useful but I made a little picture a while ago to help explain the difference in latency between say 40fps on a single card and on two cards.
[image: frame latency at 40 fps, single card vs two cards in SLI]
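If the picture doesn't load, the gist is simple arithmetic. A minimal sketch, assuming idealised alternate-frame rendering (AFR) with no fancy driver pacing:

```python
# Back-of-the-envelope frame latency at 40 fps, assuming simple
# alternate-frame rendering (AFR): each GPU starts every other frame,
# so an individual frame is in flight for two display intervals.

target_fps = 40
frame_interval_ms = 1000 / target_fps        # 25 ms between displayed frames

single_card_latency_ms = frame_interval_ms   # one GPU finishes a frame in ~25 ms
afr_latency_ms = 2 * frame_interval_ms       # AFR: each frame takes ~50 ms to render

print(f"single card @ {target_fps} fps: ~{single_card_latency_ms:.0f} ms per frame")
print(f"two cards (AFR) @ {target_fps} fps: ~{afr_latency_ms:.0f} ms per frame")
```

Same displayed framerate, but each individual frame spends twice as long between input and display on the SLI setup.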
 
With SLI you will never get perfect scaling in 99.99999999% of games; honestly, I can't think of a single one that does. You're likely to get anywhere from 70-90% efficiency in the few games that use it well.

So you pay double the price for well under double the performance. You draw double the power for well under double the performance. You get twice the noise for that same non-doubled performance. You add more cables to manage and decrease the overall airflow (not all cases will be affected, depending on size).
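To put rough numbers on that scaling (a minimal sketch; the 60 fps baseline is just an example figure I picked):

```python
# Rough illustration of what "70-90% SLI efficiency" actually buys you.
# The 60 fps baseline is an example figure, not a benchmark result.

base_fps = 60
for efficiency in (0.70, 0.80, 0.90):
    sli_fps = base_fps * (1 + efficiency)    # second card adds efficiency x base
    print(f"{efficiency:.0%} scaling: {base_fps} fps -> {sli_fps:.0f} fps "
          f"for double the price and power")
```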

So really all those cons for basically two pros:
1) looks cool
2) benchmarking

I'd just rather have one better card. You won't deal with any of the problems, and you'll have a quieter PC alongside better performance per watt.

I'd say you'd be better off selling your 1080 and putting that money towards a 2080 if you cannot afford a 2080 Ti.
That said, I wouldn't even bother to upgrade unless you're disappointed with your current performance. If all the games you play support SLI well and have few issues, then why not, I suppose.
 

Lol, that latency picture looks like a copy of the one Nvidia made a year ago ;)

It does make me wonder, though, whether Nvidia has shot themselves in the foot a little. With developer support for decent SLI performance fading, we're opting for single-card solutions, which means a fall in double-purchase sales. On top of that, I'd have thought single-card sales would also fall, as users will feel that a 30% bump over the previous gen isn't worth upgrading for, and will skip the current gen to wait for the next.

In the past, users would buy one or two cards. Those buying one would buy another later on, knowing that the two-card solution would outperform the next gen at the same tier, e.g. 980 SLI beating a single 1080. It feels like Nvidia knew SLI was no longer favourable and intentionally pushed up the price of their RTX cards to squeeze as much as they can from those of us only going with single-card setups. Had SLI/NVLink been the recommended go-to setup, we'd most likely have seen a lower-cost generation of cards.

I know RTX involved millions in R&D, but I no longer buy that this is the reason for the high cost.
 
I'd sell the 1080 and buy a 2080, though that's going by local prices here; it might be totally different for you. 1080 Tis are scarce and more expensive than the 2080, which is IMO the better card of the two.
 
I think it's actually better for Nvidia now. With the 1000 series, if someone wanted to spend £1k+ or so on GPUs, they'd probably have to go SLI. But now Nvidia have single-card products going all the way up to £1.1k or whatever, which must have much better margins than two cards because of the manufacturing costs and so on. As long as GPU dies keep scaling upwards aggressively, it should be OK. Yeah, a 2080 is a good choice too, or a VII, depending on what you're going for and local prices.

Also, I know I've covered this a lot, but the RTX 2080 Ti die is way, way larger than a 1080 Ti die. They genuinely moved the product stack into a category that consumer gaming cards had never reached before (700mm²+ dies). The only similarly sized card before it was the £3000 Titan V. SLI dying isn't the reason the 2080 Ti costs so much; it's that it's at the absolute limits of silicon manufacturing (which is a first for a gaming card, really). Maybe the death of SLI is partly what motivated them to push top-class single-die silicon into consumer markets, though.
 
The £2800 Titan V die packs quite a few more transistors than Turing.

GV100 = 21.1 billion transistors, 815mm^2 die

TU102 = 18.6 billion transistors, 754mm^2 die

Volta needs the extra due to its DP cores.
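
Quick sanity check on those figures: both dies are on the same 12nm process, so you'd expect the transistor density to come out nearly identical, and it does:

```python
# Transistor density from the figures above (million transistors per mm^2).
chips = {"GV100 (Volta)": (21.1e9, 815), "TU102 (Turing)": (18.6e9, 754)}

for name, (transistors, die_mm2) in chips.items():
    density = transistors / 1e6 / die_mm2
    print(f"{name}: {density:.1f}M transistors/mm^2")
```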
 
The 1080 Ti had a 471mm^2 die and ~12bn transistors. The GV100 vs TU102 gap (~8% in die size) is quite small in the grand scheme of things (especially when we're talking about a ~60% jump from the 1080 Ti to TU102).

You've got to remember we're talking about two dimensions here too: a 775mm^2 die vs an 825mm^2 die is the difference between, say, a 20mm x 38.75mm die and a 20mm x 41.25mm die.
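
Here's that arithmetic spelled out:

```python
# Fixed-width comparison: hold one edge at 20mm and see how little the
# other edge moves between the two (rounded) die areas.
width_mm = 20
for area_mm2 in (775, 825):
    length_mm = area_mm2 / width_mm
    print(f"{area_mm2}mm^2 at {width_mm}mm wide -> {length_mm:.2f}mm long")
```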

(Volta's cores are loosely the same size as Turing's, since the RT cores are comparable in size to Volta's set of DP cores; Volta just has more CUDA cores: ~11% more cores for ~13% more transistors.)

The huge size of those RT cores (one per SM) is partly why the RTX 2080 (545mm^2) is ~16% larger than a 1080 Ti while packing almost 20% fewer CUDA cores. Turing cores are absolutely giant compared to Pascal's, and must be within a hair (literally, physically) of Volta's.
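
A rough way to see it is die area per CUDA core (die sizes from above; the core counts, 2944 and 3584, are the public specs, and this crudely ignores that the die also holds memory controllers etc.):

```python
# Die area per CUDA core, using the die sizes above and public core counts.
cards = {"RTX 2080 (Turing)": (545, 2944), "GTX 1080 Ti (Pascal)": (471, 3584)}

for name, (die_mm2, cores) in cards.items():
    area_per_core = die_mm2 / cores * 1000   # thousandths of a mm^2 per core
    print(f"{name}: {area_per_core:.0f} x 10^-3 mm^2 per CUDA core")
```

Turing comes out at roughly 40% more die area per CUDA core, which is where the RT and Tensor hardware is sitting.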
 