Hello,
I have recently acquired two NVIDIA Tesla K20 GPUs and would like to build a high-performance PC to host them.
My computational needs are:
*) Linux.
*) Running my own neural network (NN) code in C - usually I have tens or even hundreds of different independent neural networks to run.
*) A lot of text processing to prepare the input and output data files (100 MB max each) using sed, awk, grep and other Unix command-line tools. The preprocessing is glued together with Perl or Bash scripts.
*) Plotting data using R or gnuplot.
*) Maybe a MySQL database, but that can go on a separate dedicated box.
*) No fancy graphics needs, video streaming, etc.
(NNs port nicely and efficiently to GPUs.)
The most intensive application is the NN code, which I plan to port to the GPUs.
What I have in mind is to use the GPUs for running the NNs and the CPU for the text processing, R and gnuplot, as follows:
Each pipeline (a Perl/Bash script) starts on a single CPU thread, processes the data files and writes them as temporary files to the hard disk or a Linux ramdisk, then runs an NN by offloading portions of code to the GPU, etc. (not sure yet about the details). When the NN finishes, the results are processed again on the CPU and disk, plotted, etc., and the pipeline finishes. I can have many pipelines running in parallel on the available CPU threads.
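To make the GPU step concrete, here is a minimal sketch of how each pipeline's NN step could pin itself to one of the K20s via the CUDA runtime. It is only a sketch of the shape I have in mind: train_network() is a made-up stand-in for my real NN code, and the /mnt/ramdisk paths are hypothetical.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Stand-in for my real NN training code (hypothetical). */
static int train_network(const char *infile, const char *outfile)
{
    printf("training on %s -> %s\n", infile, outfile);
    return 0;
}

int main(int argc, char **argv)
{
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);            /* two K20s now, maybe four later */
    if (ngpus < 1)
        return 1;                          /* no GPU present */
    int pipeline_id = (argc > 1) ? atoi(argv[1]) : 0;
    cudaSetDevice(pipeline_id % ngpus);    /* round-robin pipelines over the GPUs */
    /* read the preprocessed temp files from the ramdisk, run the NN,
       write the small results back for CPU-side postprocessing */
    return train_network("/mnt/ramdisk/in.dat", "/mnt/ramdisk/out.dat");
}

Each pipeline would pass its own index as argv[1], so concurrent pipelines spread themselves over whatever GPUs are present.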
I don't use commercial, closed-source software, which means my pipelines/code can utilise all the threads of my CPU, for example by running a separate instance of the same software on each thread.
Alternatively, I could run a single C program that uses pthreads to spawn functions in parallel (I mention this because a lot of CPU benchmarks are single-threaded tests, on which multi-core processors do badly for their price); the threads would read a huge shared block of memory and send back small results, perhaps via, say, MPI.
Either way, I have high utilisation of the CPU threads and portions of code that can go to the GPU. I write my own C programs, and maybe I can get the compiler to use the extra AMD instructions for more efficiency.
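As an illustrative toy of that pthreads version (the names and the work done are made up; my real workers would run NN-related functions): NTHREADS workers all read one big shared, read-only block, and each hands back a single small number.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8                /* e.g. one per FX-8350 core */
#define NDATA    (1 << 20)

static double shared_data[NDATA]; /* the big shared read-only input */

typedef struct { int id; double result; } job_t;

static void *worker(void *arg)
{
    job_t *j = arg;
    double sum = 0.0;
    /* each thread scans its own stride of the shared block */
    for (int i = j->id; i < NDATA; i += NTHREADS)
        sum += shared_data[i];
    j->result = sum;              /* the small result sent back */
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    job_t jobs[NTHREADS];
    for (int i = 0; i < NDATA; i++)
        shared_data[i] = i * 0.001;   /* stand-in for real data */
    for (int i = 0; i < NTHREADS; i++) {
        jobs[i].id = i;
        pthread_create(&t[i], NULL, worker, &jobs[i]);
    }
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(t[i], NULL);
        printf("thread %d -> %g\n", i, jobs[i].result);
    }
    return 0;
}

(Compile with gcc -O2 -pthread.) The point is that the big input is shared while each thread's output is tiny.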
The first question is: do I benefit from a motherboard with two CPUs and both GPUs in a single PC, or should I build two PCs, each with one CPU and one GPU? The parallelism I need is such that I won't need shared memory between the jobs, so not much bandwidth between them either. Either solution is OK, but I prefer one PC for ease of transport etc., and I guess lower cost.
Since I am constrained on money, space and watts, I would like to build a single workstation on a single motherboard hosting one or two (what is the max?) high-core-count CPUs (e.g. AMD's FX-8350, 8 cores, 155 UK pounds) and large memory (what is the max RAM?).
Finally, if all goes well, in 12 months I may get two extra Tesla K20s - will the system be able to host these as well? I assume a mobo with four PCIe slots exists and can take the four GPU cards, but will they physically fit, and will they be powered OK?
CPU:
For the reasons above I would like the CPU to give me lots of threads and be cheap. The AMD FX-8350 looked like a price I could afford for what I get according to the benchmarks (e.g. http://www.cpubenchmark.net/high_end_cpus.html#cpuvalue). The Intels were ridiculously expensive (or rather, the AMDs ridiculously cheap).
Do any of you home-build experts have serious objections to this choice of CPU? Would you prefer an Intel quad-core, say an i5, which is only about 10% more expensive?
Based on the choice of the AMD FX-8350 (or your suggestion), can you please suggest:
1) MOBO: a lean, mean, cheap one that can host two NVIDIA GPUs (SLI may come in handy), ideally with a simple on-board GPU for the needs of the OS (Linux, no games, no fancy apps, etc.), simple on-board network and sound, support for two or three 3TB hard disks (SATA 6Gb/s?), top-of-the-range USB transfer speeds for adding external storage if need be, AND maximum data-transfer rates to the GPU cards. Is there a mobo that can host two AMD CPUs? (No overclocking, no fancy gimmicks for me, just fast transfers and the maximum number of slots.)
2) PSU: each K20 GPU has a maximum power consumption of 225W, and the AMD FX-8350 is rated at 125W. So depending on one or two CPUs, I think 850W or 1050W? Then XFX or Corsair Enthusiast, depending on the cheap deals at the time. Any objections/suggestions?
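My rough arithmetic: 2 x 225W + 125W = 575W, plus (I'm guessing) 100-150W for the motherboard, RAM, disks and fans, comes to roughly 675-725W, so 850W leaves decent headroom. If I later go to four K20s, 4 x 225W = 900W plus the rest is about 1150W, so presumably a 1200W+ unit.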
3) COOLING: the GPU cards come with their own fan (that's what the spec says; whether it is enough to dissipate 225W of heat is another question). Do you think I need extra cooling? Will standard case fans (say three, plus the PSU's) be enough? I can install extra fans. Do I need water cooling? I intend to run the system in a hot climate; summer is about 35C (95F). Is water cooling cheaper in the end, quieter and *safe*?
4) RAM: I would like to go for the maximum RAM I can fit on the board, and fast RAM at that (1066?). Of course price is still an issue, but I can utilise all the RAM by creating Linux ramdisks (tmpfs) and storing the temp files there.
Please tell me your opinion about using two or more CPUs on the same board. I guess some of you may suggest a cluster of workstations, also in view of the possible extra two GPUs, but then we get into other problems of power and space. (Is there a high-end solution to this, e.g. servers, that is affordable?)
I live in the UK and usually buy from http://www.dabs.com
Thanks - any advice or hints are welcome.
P.S. I am starting to realise that I am in deep water here with all these choices, so if you happen to know a small, honest group/company who build custom PCs and know their stuff, please let me know as well (UK).
Thanks, boys and girls - overclockers.