Nvidia RTX 2080 and RTX 2080 Ti Review

He just runs a few benchmarks and posts the numbers without going into too much detail:
  • In Ghost Recon Wildlands @4K, he got 66fps on a single card and 105fps on his SLI setup (~59% more fps): https://youtu.be/cbvFb0IHTg0?t=382
  • Shadow of the Tomb Raider @4K (highest settings), he got 124fps in SLI, which is (as he mentioned) more than double the fps from his single card setup, but that's probably because his SLI setup was slightly overclocked as well. Impressive nonetheless: https://youtu.be/cbvFb0IHTg0?t=396
  • Rise of the Tomb Raider 80fps vs. 145fps in SLI (~81% more fps): https://youtu.be/cbvFb0IHTg0?t=434
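If you want to sanity-check those scaling percentages yourself, here's a quick back-of-envelope sketch (the fps figures are the ones quoted above; the function name is just for illustration):

```python
# Rough multi-GPU scaling check using the fps numbers from the video.
def scaling(single_fps, dual_fps):
    """Return the percentage fps gain from adding a second card."""
    return (dual_fps / single_fps - 1) * 100

benchmarks = {
    "Ghost Recon Wildlands @4K": (66, 105),
    "Rise of the Tomb Raider @4K": (80, 145),
}

for game, (single, dual) in benchmarks.items():
    print(f"{game}: +{scaling(single, dual):.0f}% fps")
# Ghost Recon Wildlands @4K: +59% fps
# Rise of the Tomb Raider @4K: +81% fps
```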

Then there were a few 3DMark runs where he tried to break some records with fancy air cooling. ^_^

The system was based on an Intel i9-7980XE, btw.

Thanks, man. Yeah, that's very nice. Tomb Raider has often been well optimised for SLI, but I don't remember Ghost Recon being optimised for it, so those numbers are very impressive.
 
Can I just be "that guy" and say that it's not SLI any more, it's NVLink? I say it because SLI meant "Serial Link Interface" and it no longer uses that interface.

In some games it's not running in SLI either, not in the old way. It uses DX12 AFR.

/anal mode off.
 
It actually was a long time ago that the "SLI" interconnect was serial; that was back when 3DFX was around. SLI in more recent years, including Pascal, was in fact a parallel interconnect with 8 data lanes and was therefore renamed "Scalable Link Interface" just to keep the SLI moniker. But I guess you know that already. ;)

To add to the topic: the long and short of it is that SLI at this point seems relevant only for a few very specific games at specific resolutions. In general it is no faster, or even slower, than a single card. That can change with drivers and game updates, but it's very unlikely to happen for existing games. On top of all that, it's very expensive at ~$2500 for a dual-2080 Ti setup, with a lot more power draw for not much in return. Sad.
 
Um... you do realize AMD has used this since, like, 2014, right?



Doesn't matter if I did or didn't. The fact is I'm not wrong. That doesn't mean it's not the best solution going forward. However, only Nvidia's top-end cards would run into this problem of not enough bandwidth, hence them using NVLink. Before that it did not matter; scaling was much more likely down to drivers and software rather than PCIe bandwidth.

You are so wrong.

It was a problem using a pair of 295x2 cards on a board without two PCI-E 3.0 x16 slots.

I think AMD are also looking at using bridges again on some of their future cards.
 
You are so wrong.

It was a problem using a pair of 295x2 cards on a board without two PCI-E 3.0 x16 slots.

I think AMD are also looking at using bridges again on some of their future cards.

You can believe your false opinion. AMD switched to running across PCIe because there is more bandwidth, which is exactly what they said and what they did. So you can argue with them if you like; if you don't want to, then look it up. The old bridge connection was around 900MB/s whereas PCIe was around 16GB/s, if I remember right. Again, look it up for the actual numbers; this was years ago they announced it, and I don't remember them specifically.
 
You can believe your false opinion. AMD switched to running across PCIe because there is more bandwidth, which is exactly what they said and what they did. So you can argue with them if you like; if you don't want to, then look it up. The old bridge connection was around 900MB/s whereas PCIe was around 16GB/s, if I remember right. Again, look it up for the actual numbers; this was years ago they announced it, and I don't remember them specifically.

I remember AMD stating that too. But I'm sure I have read a recent article commenting that AMD may have to go back to a cable link.

Anyway, AMD only recently phased out CrossFire (early 2017, wasn't it?) in order to push for mGPU and DX12. Perhaps given that the mGPU idea has never caught on well (since it's on devs to support it), coupled with the fact that mixing GPUs never looks pretty from an enthusiast's POV, AMD are going back to their roots, so to speak?
 
You can believe your false opinion. AMD switched to running across PCIe because there is more bandwidth, which is exactly what they said and what they did. So you can argue with them if you like; if you don't want to, then look it up. The old bridge connection was around 900MB/s whereas PCIe was around 16GB/s, if I remember right. Again, look it up for the actual numbers; this was years ago they announced it, and I don't remember them specifically.

You do realise NVidia cards also use the same PCI-E slot as AMD cards; not all the traffic goes through the SLI bridge.

Unlike you, I have also tested 4-way CrossFire without bridges, and the performance can really suffer compared to 4-way SLI.
 
You do realise NVidia cards also use the same PCI-E slot as AMD cards; not all the traffic goes through the SLI bridge.

Unlike you, I have also tested 4-way CrossFire without bridges, and the performance can really suffer compared to 4-way SLI.

How is the performance with two cards, though? That is the popular CF config. Do you see any noticeable differences then?
 
You do realise NVidia cards also use the same PCI-E slot as AMD cards; not all the traffic goes through the SLI bridge.

Unlike you, I have also tested 4-way CrossFire without bridges, and the performance can really suffer compared to 4-way SLI.


Anything AMD does is better..... NBD loves snake oil
 
You do realise NVidia cards also use the same PCI-E slot as AMD cards; not all the traffic goes through the SLI bridge.

Unlike you, I have also tested 4-way CrossFire without bridges, and the performance can really suffer compared to 4-way SLI.

That's awesome? Guess what: they don't have to use it. Nvidia used the old SLI bridge; CrossFire since 2014 (after further research, actually Q4 2013) has used the PCIe connection. Why must you insist they don't? It's clearly obvious they do, and you should know that since you must boast about your 4-way setup. It has more bandwidth, and that's a fact. Don't like it? I don't care, but to deny the spec sheet details is just ignorance at its finest. The whole reason AMD switched was the bandwidth. You cannot compare AMD vs Nvidia 4-way; drivers are the biggest factor in determining performance. But seriously, believe what you will; it's just laughable that you deny facts and spec sheets.

Anything AMD does is better..... NBD loves snake oil

As for you, that's just a stupid comment. Are you also blind to the fact it's literally faster across the PCIe bus? You should know as a reviewer.
I don't even have any AMD products. I don't care about brand loyalty. I'm just correcting something that was false, but if you all want to be ignorant about it then go ahead. Ignorance is bliss on this forum, it seems.

To silence the ignorance, here's a link explaining the purpose of XDMA vs the bridge:
https://www.anandtech.com/show/7457/the-radeon-r9-290x-review/4
Further, this article was supported by AMD in this link: https://community.amd.com/community/gaming/blog/2015/05/11/modernizing-multi-gpu-gaming-with-xdma



You have 16GB/s across the PCIe bus vs 900MB/s across the bridge, which is not enough for 4K Eyefinity. So if you still want to believe the bridge is faster, I can't stop you. It's wrong, but there's nothing I can do. I suspect in the future AMD will get some NVLink-type tech to improve performance, unless they stick with PCIe and let the future 4.0 standard do it. Who knows. It's sad to see Tom of all people stoop so low when he should know better about this technology.
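To put that 900MB/s figure in perspective, here's a rough back-of-envelope sketch. It assumes AFR with 32-bit colour at 4K60, where the secondary card has to copy every frame it renders across the link before scan-out; the constants are illustrative assumptions, not AMD's figures:

```python
# Back-of-envelope: can a ~900 MB/s CrossFire bridge carry 4K AFR traffic?
# Assumption: in AFR the secondary card renders every other frame, and each
# of those frames must cross the link to the card driving the display.
WIDTH, HEIGHT = 3840, 2160   # 4K UHD
BYTES_PER_PIXEL = 4          # 32-bit colour
REFRESH_HZ = 60

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
# Secondary card supplies half the frames, i.e. 30 of the 60 per second.
traffic_gbps = frame_bytes * (REFRESH_HZ / 2) / 1e9

print(f"Frame size: {frame_bytes / 1e6:.1f} MB")
print(f"AFR transfer traffic at 4K60: {traffic_gbps:.2f} GB/s")
# Roughly 1.0 GB/s, which already exceeds the ~0.9 GB/s bridge before
# triple-wide Eyefinity multiplies the pixel count by three.
```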
 
Dude, not being funny, but lately your aggression has been off the charts. IDK what's causing it (stress, obs) but try and chill out. Passing stress onto people isn't cool. We're your buddies here, not the enemy.
 
Dude, not being funny, but lately your aggression has been off the charts. IDK what's causing it (stress, obs) but try and chill out. Passing stress onto people isn't cool. We're your buddies here, not the enemy.

Not being aggressive at all. But telling someone they were wrong, stating a fact, and then getting passive-aggressive responses is sure to cause tension, so I respond with the same passive aggressiveness. If they don't like it, they should think about their actions. I provided all the necessary sources to support my claim, and if they continue to not believe AMD themselves, then it's just trolling at that point.
 
No, you are being aggressive, dude. I've received PMs from people about it.

Just chill, dude. Seriously, it's not worth it. "You can lead a horse to water" comes to mind.
 
How is the performance with two cards, though? That is the popular CF config. Do you see any noticeable differences then?

Performance for two normal cards is very good.

Where it can go wrong for two cards is if they are very fast, like the 295x2s, used on a board that does not have the full PCI-E 3.0 x16/x16 available for the slots, and running at 2160p.

The point is that AMD in the future will release faster cards with performance like the latest 2080 Ti or even faster, and this could cause a lot of problems without the use of bridges for a two-way system.


NVidia have moved on to using NVLink bridges (at £75 each) for a reason, with a massive bandwidth increase over SLI. The other thing to remember is the 2080 Ti is also designed to use a lot more bandwidth than the 2080, reflecting the former card's higher performance. You can see the difference in bandwidth in the graph below. :)



[image: graph comparing NVLink bandwidth on the RTX 2080 Ti vs the RTX 2080]
 
Where it can go wrong for two cards is if they are very fast, like the 295x2s, used on a board that does not have the full PCI-E 3.0 x16/x16 available for the slots, and running at 2160p.
The point is that AMD in the future will release faster cards with performance like the latest 2080 Ti or even faster, and this could cause a lot of problems without the use of bridges for a two-way system.

See, this is where I take issue. Using the bridge system would make it even worse; as per the spec sheet I linked, 900MB/s is simply not enough. AMD's bridge system (before XDMA) was much slower than the Pascal bridge system, so XDMA was, and is, the only way for them to improve bandwidth. I believe it was still faster than even Nvidia's HB bridge, because that bridge did not allow 4K surround like XDMA does, so there's still more headroom. The 295x2 is not a good example, as the PLX chip itself is probably holding it back. Although, after another quick review check, I did not see it perform meaningfully differently compared to just running a pair of normal cards in CrossFire, which is why this claim stumps me. If you have proof of it, I'd like to see it, as I really don't understand where it's coming from.

However, as I said earlier, both will eventually need to move on to something else. Nvidia has, for good reason: as you say, the 2080 Ti is just so damn fast that even PCIe 3.0 x16 in SLI is probably not enough to sustain bandwidth for the cards alone plus the communication between them. AMD will either create a similar NVLink or use PCIe 4.0 if that's a viable option, which for them it might be, because their cards are much slower.
 
The NVLink interface on the 2080 Ti has a theoretical bandwidth of 300GB/s. This is far more than PCIe x16 can supply, even the 4.0 variety.
OK, Nvidia is only enabling 100GB/s at the moment on the 2080 Ti, but even that is far more than a PCIe slot can provide.

For AMD, the only way is using a similar interface if they choose to support multi-GPU in the future.
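For a side-by-side of the bandwidth figures being thrown around in this thread, a quick sketch (the PCIe values are the standard theoretical x16 peaks; the NVLink values are the ones quoted above, not spec-sheet exact):

```python
# Link bandwidths discussed in the thread (GB/s, theoretical peaks).
links = {
    "PCIe 3.0 x16": 15.75,
    "PCIe 4.0 x16": 31.5,
    "NVLink as enabled on 2080 Ti": 100.0,
    "NVLink theoretical": 300.0,
}

# Print from slowest to fastest link.
for name, gbps in sorted(links.items(), key=lambda kv: kv[1]):
    print(f"{name:>28}: {gbps:7.2f} GB/s")
```

Even the currently enabled NVLink rate is a bit over 3x what a PCIe 4.0 x16 slot could offer.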
 