Samsung 860 Pro 4TB SSD Review

I'd love that, I could fit just about all my games on it, but I think my Mrs would kill me for spending that much money. Maybe now that we're hitting the max speed of SATA we might see the price per GB coming down, hopefully.
 

Yeah, hopefully price/GB will start going down this year as these new NAND facilities begin operation. Price/GB is only this high because of the huge recent demand, though thankfully not to the same extent as DRAM.

Hopefully 2018 is the year where the supply of PC components can finally keep up with demand.
 

We've been hitting the limit of SATA 6Gb/s for a few years now; there's a reason NVMe drives were developed. My 840 Evo drives can saturate a SATA 6Gb/s connection, and the 840 family is at least 5 years old now.
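For reference, the rough arithmetic behind that ceiling (the figures below are standard SATA III and 840 Evo spec numbers, not taken from the review):

```python
# SATA 6 Gb/s uses 8b/10b encoding, so only 80% of the raw line rate
# carries data. This is why SATA III SSDs top out around 550-560 MB/s.
raw_gbps = 6.0  # SATA III raw line rate in Gb/s
effective_MBps = raw_gbps * 1e9 * (8 / 10) / 8 / 1e6  # encoding overhead, bits -> bytes

print(f"Theoretical SATA III data rate: {effective_MBps:.0f} MB/s")  # 600 MB/s

# An 840 Evo reads at roughly 540 MB/s sequentially (manufacturer spec),
# i.e. about 90% of what the link can carry after encoding overhead.
drive_MBps = 540
print(f"Link utilisation: {drive_MBps / effective_MBps:.0%}")  # 90%
```

Once protocol overhead on top of encoding is accounted for, ~540-560 MB/s really is the wall, which is why every faster consumer drive since has been NVMe.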
 
I don't see many uses for this capacity. You're way better off with a 2TB NVMe drive than a 4TB SATA drive if they're the same price. But it's good to have the option for those limited use cases.
 
Great review. For my 33rd next January I'm going to ditch my last HDD and get this drive, or whatever is out by then in this capacity.
 
But the GPUs are still taking up PCIe lanes, which means you can't have a huge PCIe SSD array without impacting those lanes.

PLX chips take care of that. SSDs won't use their full bandwidth 90% of the time. A full-blown GPU may reserve 16 lanes on standard boards, but in the real world it uses maybe a few percent more than x8, so roughly 45% of those lanes sit unused. Even if you try to saturate everything by running benchmarks on all the GPUs and SSDs at once, the PLX won't break a sweat: realistically you can count on each GPU using about 9 PCIe lanes' worth of bandwidth, and an SSD will only saturate its x4 link when you copy massive files, and those never happen at the same time unless you intentionally try to pull it off. The PLX multiplexes all that bandwidth, which in normal cases is reserved in far larger allocations than are actually needed. It may introduce some lag, but it's minor and only under a massive surge of data. (I haven't seen benchmarks or researched the newer chips and drivers, but I hope they've sorted that out by now.)

In short: you can run all of that on this board, and it will be really hard (practically impossible) to bottleneck the PCIe lanes with GPUs and SSDs.
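The oversubscription argument above can be put into numbers. This is a toy model using the rough figures from the comment (9 lanes' worth of real GPU traffic, SSDs mostly idle); the lane counts and usage fractions are illustrative, not measured:

```python
# Toy model of PLX oversubscription: devices reserve lanes downstream of
# the switch, but the switch uplink only has to carry their *actual*
# concurrent traffic, not their reserved maximums.
PCIE3_LANE_MBps = 985  # approx. usable bandwidth per PCIe 3.0 lane

devices = {
    # name: (lanes reserved, typical fraction of that link actually used)
    "GPU 1": (16, 9 / 16),   # ~9 lanes' worth of traffic in practice
    "GPU 2": (16, 9 / 16),
    "NVMe SSD": (4, 0.10),   # only saturates x4 during big file copies
}

reserved = sum(lanes for lanes, _ in devices.values())
typical = sum(lanes * frac for lanes, frac in devices.values())

print(f"Lanes reserved downstream: {reserved}")           # 36
print(f"Typical concurrent demand: {typical:.1f} lanes "
      f"(~{typical * PCIE3_LANE_MBps / 1000:.1f} GB/s)")
# Even these "typical peak" figures only slightly exceed a x16 uplink,
# and the peaks rarely coincide -- which is why the switch seldom
# bottlenecks anything in practice.
```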
 

I always thought PCIe lanes were set in the BIOS by the motherboard. As in, if you have more devices than the motherboard/CPU allows, the motherboard cuts the bandwidth in half, or whatever increment is appropriate. So if you have two x16 PCIe slots and use them for a pair of graphics cards, the system runs both slots at x8 because you only have 16 lanes for PCIe devices. I didn't know it could throttle your devices in a variable fashion.
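That static behaviour (bifurcation) does exist alongside PLX switching. A minimal sketch of the fixed split described above, with made-up numbers for a simple two-slot board:

```python
# Static lane bifurcation, as described in the comment: with a fixed CPU
# lane budget, populating more slots splits the lanes evenly at boot.
# This is negotiated once at link training, not throttled dynamically.
def bifurcate(cpu_lanes, slots_populated):
    """Return the per-slot link width for an evenly-split board (toy model)."""
    if slots_populated <= 1:
        return [min(cpu_lanes, 16)]
    width = cpu_lanes // slots_populated  # e.g. 16 lanes -> x8/x8
    return [width] * slots_populated

print(bifurcate(16, 1))  # [16]   one GPU gets the full x16
print(bifurcate(16, 2))  # [8, 8] two GPUs run at x8 each
```

A PLX switch is the dynamic alternative: it presents full-width links to every slot and arbitrates the shared uplink packet by packet, rather than fixing widths at boot.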
 

I had a motherboard once (the Asus M3A32 MVP) where you could actually allocate the lanes to certain slots, mostly because it supported Quadfire. Was pretty cool, but you always lost a slot completely as there weren't enough lanes :D

So you couldn't run this...

[image: GBx82eZ.jpg]


All at the same time (two 3870 X2s and an 8800U). Shame, as the PhysX hack was awesome :(

BTW, I know people joke about certain GPUs heating their room in winter etc., but I can tell you now that thing ^ was a 1200W dual-PSU heater.

You couldn't put the back of your hand to the vents on the back. lol.
 