
Personally I'd just do it the way I showed in my graphic. You'd have to raise the SSD right out of the entire bundle of GPUs and probably lay it horizontally just to fit it in the case. Not to mention you'd be using a very long cable, and that will affect throughput and latency. The more noise you introduce, the more likely performance will suffer.
 
How would you raise that second GPU though?
Only 10 PCI slots on the case.
 
Vicey's solution is probably the best, or even the only one that would work, but it would mean some serious modding to keep more or less clean looks.
 
Well you only actually need 9 slots: 8 for the GPUs and one for the OCZ card. So you have plenty.

2 + 2 + 2 + 2 = 8 (GPUs) + 1 (OCZ SSD) = 9 total.
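
To make the slot arithmetic explicit, here's a throwaway Python sketch; the dual-slot-per-GPU assumption is taken from the 2 + 2 + 2 + 2 figure above:

```python
# Sanity check of the slot count. Assumed: four dual-slot GPUs plus
# one single-slot PCIe SSD card in a case with 10 expansion slots.
gpus = 4
slots_per_gpu = 2   # each GPU is a dual-slot card
ssd_cards = 1       # the OCZ PCIe SSD occupies one slot
case_slots = 10     # expansion slots on the case

needed = gpus * slots_per_gpu + ssd_cards
print(f"slots needed: {needed}, spare: {case_slots - needed}")
# -> slots needed: 9, spare: 1
```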

I'd probably fashion something out of aluminum to hold the last two cards and the OCZ SSD, using the last 4 expansion brackets on the case as part of it to screw into.

Others have done this previously: http://i.imgur.com/ZLmZe.jpg
 
But honestly, if this were me, I'd dump that OCZ SSD (OCZ is trash anyway; I've had 4 of their SSDs fail on me out of 4 purchased) and just get a bunch of SATA SSDs and RAID them myself.
 
Have you got a link to more of this guy's stuff?

And sorry, I still don't see how what you drew would work - can you draw it for a 10-PCI-slot layout please :)

Indebted forever to you two at the moment :D
 
So you don't think the enterprise RM84 would be any good at all, realistically?
Good time to bring it up really - so no good? And if they're no good, what's your best alternative for a fast, reliable, nice-capacity SSD for the OS and scratch space (so temps etc.)?

No OCZ would solve a ton of these issues - but if you could still show the OCZ layout that'd be good :) hehe
 
I don't have a link to more of it, but basically a university stuck a bunch of cards in a single case with an Asus P6T7 motherboard (7 PCIe slots, like the Z9) and built this aluminium cage that goes inside the case to hold all the GPUs away from the motherboard using PCIe ribbon cables.

And this is how you should be doing it:
[attached diagram: egg6F.png]

Notice I've drawn in long black screws with a blue outline for the extended cards to illustrate how you would anchor them to the pre-existing 9 expansion-slot screw holes. However, you would also need to provide stability for the cards at the opposite end (not pictured) using some kind of card holder that you would have to build yourself.

Simply purchase four Intel SSDs and RAID them using the motherboard's built-in RAID0 mode. That is pretty much all the OCZ PCIe SSD does anyway: they are made up of multiple SATA-based SSDs on a circuit board with a traditional RAID controller.
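
To put rough numbers on the striping idea, here's a minimal Python sketch of the ideal scaling; the per-drive throughput figures are illustrative assumptions, not specs for any particular drive:

```python
# Back-of-the-envelope for striping four SATA SSDs in RAID0.
drives = 4
seq_read_mbs = 500    # assumed sequential read per drive
seq_write_mbs = 450   # assumed sequential write per drive

# RAID0 stripes data across all members, so ideal sequential throughput
# scales linearly with drive count. Real arrays land somewhat lower, and
# the chipset's DMI link caps the total in practice.
print(f"ideal RAID0 read:  {drives * seq_read_mbs} MB/s")
print(f"ideal RAID0 write: {drives * seq_write_mbs} MB/s")
```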
 
Okay sounds like a plan - may just do two SSDs in RAID1 on the blue SATA sockets.
And then four drives as two mirrored pairs in RAID10 or something.
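
For what it's worth, a quick Python sketch of the usable capacity under that plan, assuming a four-drive RAID10 and illustrative drive sizes:

```python
# Usable capacity: two SSDs mirrored in RAID1 plus a four-drive RAID10.
ssd_gb = 256    # assumed size of each SSD
hdd_gb = 2000   # assumed size of each hard disk

raid1_usable = ssd_gb               # RAID1 mirrors: one drive's worth
raid10_usable = (4 * hdd_gb) // 2   # RAID10: mirrored pairs, half the raw total

print(f"RAID1  (2 x {ssd_gb} GB SSD): {raid1_usable} GB usable")
print(f"RAID10 (4 x {hdd_gb} GB HDD): {raid10_usable} GB usable")
```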

Interesting how Corsair's specs say they should be faster, but the benchmarks say Intel. Oh well, I think the above may need to be the plan.

I have to thank both of you SO much for all your help and advice.
 
Well, I personally favor the Intel SSDs because they have the highest reliability. Last I read, they have an RMA rate on their SSDs of around 0.77%, followed closely by Crucial at around 0.80%. Compare that with OCZ, who are at something like 6%.

Whatever you do, go for established controllers and stay away from the SandForce stuff, in my opinion. They die so fast it's like having a suicide bomber in your computer; it's not a question of if but when they will die.
 
Something to note about RAID and SSDs: the latest storage manager from Intel does support TRIM when SSDs are in RAID0 on the Intel storage controller. (C10R)

I don't know if they've released this for X79/C600/C606 yet, but they have done so for Z77, and they have publicly stated that they will release it for X79. For all I know they already have; I'm not 100% sure on the timescale.

So this means you will get idle cleanup beyond just the SSDs' built-in garbage collection, resulting in longer sustained performance over the life of your deployment.
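
If you want to sanity-check the OS side of this on Windows, here's a minimal sketch (`fsutil` may need an elevated prompt). Note it only confirms that Windows is issuing TRIM commands, not that the RST driver passes them through to a RAID volume:

```python
# Query Windows' TRIM (delete notification) setting via fsutil.
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
)
print(out.stdout.strip())
# "DisableDeleteNotify = 0" means TRIM commands are enabled.
```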
 
I'm using RAID1 (SSD) and RAID10 (non-SSD) - so does this matter?
 
As long as you put the SSDs on the Intel storage controller and not on any 3rd-party chip the motherboard may contain, it should work fine. I believe it will function through RAID0, 1 and 10.
 
Okay, well it will all be on the Intel controller - I'll use the Marvell controller to get at my old files and hard disks etc.
 
SandForce make the controller in the Intel 520 SSD, mind.
 