Question of processability

Rastalovich

New member
If you take 3 processors (Intel/AMD/whatever): single, dual and quad.

Put each in a desktop rig that is identical each time. Install your OS and run a benchmark like SuperPI - and say the results at stock for all 3 were 10.00 for 1M, +/- 0.05%. Essentially showing yourself that, under that condition, core_0 of each of the CPUs is identical.

Keeping them @ stock, drop them into a server environment - nothing to do with virtualization, something simple like managing a handful of terabytes of shared drives, its security, a virus sweep at low-access times, and backup/maintenance etc. For argument's sake, Win2k3.

Looking at the performance of the server with each of these CPUs, given maybe a week each in an office/building of 10-20 users:

Will there be any noticeable difference?
 
Wouldn't a quad be better suited for the job?

It would be much better at multi-tasking. So like one core does backup... one does networking... and so on. :)

That's my opinion. :D
 
Yeah, I'm on the same lines as tox - most of the time there will be more than one or even two high CPU usage tasks going on at the same time.
 
I think in terms of noticing a difference, you have to think as a user accessing the files.

What do you feel the server's biggest CPU % usage may be?
 
This is all a bit vague...

A quad core would be the best for a server, for obvious reasons - multi-tasking.

The server's max CPU usage? :eh: 100%...
 
And how many users? I mean it's not really a specific question.

If all three had the exact same usage and it was little enough for the single core to cope with, then the quad would have the lowest usage.
 
It seems to me that for a file server the main bottleneck/performance concern is going to be hard disk speeds rather than cpu. Unless your IO devices use a significant amount of CPU (think older drives in PIO mode instead of UDMA). I'm not sure what sort of CPU usage a RAID array causes, nor am I sure what CPU usage a network attached storage device uses.

If we imagine a simple FTP server, for example: the typical architecture would be to have a main thread of execution "listening" for connections on a specific address and port. When a connection arrives, a sub-thread is created (or re-used), and the connection is handled in that thread.
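That threaded-listener pattern might be sketched roughly like this (in Python rather than whatever a real FTP daemon is written in; the echo handler and port number are just illustrative stand-ins for the actual file-transfer logic):

```python
import socket
import threading

def handle_client(conn, addr):
    # Each connection gets its own thread, so on a multi-core CPU
    # several transfers can genuinely run in parallel.
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)  # placeholder for the real transfer work

def serve(host="0.0.0.0", port=2121):
    # The main thread only "listens"; the work happens in sub-threads.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            threading.Thread(target=handle_client, args=(conn, addr),
                             daemon=True).start()
```

With one core, those sub-threads just time-slice; with four, the OS can schedule them onto separate cores, which is where the multi-core advantage for many simultaneous connections comes from.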

I imagine copying files using Windows Explorer has a similar architecture to what I described for the FTP server. The main process explorer.exe handles the copy requests but creates sub-threads to handle the actual copying of data.

Therefore you could say that generally a multi-core CPU will have an advantage over a single core system simply because it can handle more connections simultaneously, devoting more CPU time to each.

That said, if the speed of the hard disks or the speed of the network connection is slow enough, then these bottlenecks will prevent the CPU from ever reaching 100% usage, and in such a case the extra cores won't benefit you much.

So, if your IO devices require a lot of CPU to operate at full capacity, or your IO and network speeds are high, then you will want more CPU power in order to reach maximum potential.

If your IO devices require little or no CPU, or your IO or network speeds are low then you will not need as much CPU to reach maximum potential. This will be a lower potential than the high speed, high CPU situation.
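A toy calculation makes the point (every number here is assumed purely for illustration, not measured from any real server): throughput is capped by the slowest stage in the pipeline, and CPU demand scales with that capped throughput, not with what the CPU could do.

```python
def effective_throughput_mb_s(disk_mb_s, net_mb_s, cpu_capacity_mb_s):
    # Data can only move as fast as the slowest stage:
    # disk, network, or the CPU-bound copy/checksum work.
    return min(disk_mb_s, net_mb_s, cpu_capacity_mb_s)

def cpu_utilisation(throughput_mb_s, cpu_capacity_mb_s):
    # Fraction of available CPU actually needed at that throughput.
    return throughput_mb_s / cpu_capacity_mb_s

# Assumed figures: 60 MB/s disk, 12.5 MB/s (100 Mbit) network,
# and a CPU able to push 200 MB/s through the IO stack.
tput = effective_throughput_mb_s(60, 12.5, 200)
print(tput)                        # 12.5 - the network is the bottleneck
print(cpu_utilisation(tput, 200))  # 0.0625 - the CPU sits ~94% idle
```

Under those assumed numbers a quad core changes nothing, because the network saturates long before even one core does.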

My 2c :)
 
I could have sworn I put 10-20 users in the OP :p

nrage said:
It seems to me that for a file server the main bottleneck/performance concern is going to be hard disk speeds rather than cpu. Unless your IO devices use a significant amount of CPU (think older drives in PIO mode instead of UDMA). I'm not sure what sort of CPU usage a RAID array causes, nor am I sure what CPU usage a network attached storage device uses.

If we imagine a simple FTP server, for example. Typical architecture for that would be to have a main thread of execution "listening" for connections on a specific address and port. When a connection arrives a sub-thread is created (or re-used) and in this thread the connection is handled.

I imagine copying files using Windows Explorer has a similar architecture to what I described for the FTP server. The main process explorer.exe handles the copy requests but creates sub-threads to handle the actual copying of data.

Therefore you could say that generally a multi-core CPU will have an advantage over a single core system simply because it can handle more connections simultaneously, devoting more CPU time to each.

That said, if the speed of the hard disks or the speed of the network connection is slow enough, then these bottlenecks will prevent the CPU from ever reaching 100% usage, and in such a case the extra cores won't benefit you much.

So, if your IO devices require a lot of CPU to operate at full capacity, or your IO and network speeds are high, then you will want more CPU power in order to reach maximum potential.

If your IO devices require little or no CPU, or your IO or network speeds are low then you will not need as much CPU to reach maximum potential. This will be a lower potential than the high speed, high CPU situation.

My 2c :)

This is precisely where my thoughts were going. There are opportunities to either use a board with its own I/O processors, or rely on RAID controllers of something like a PCI variety, which can take hard drive access away from the CPU almost entirely.

I'd be very keen to see a polled log of such a server, to see whether the CPU does much work at all outside of maintenance.

The argument also came to mind when a talk on virtualization started and the opening statement was akin to "out of all the servers in the world, the average utilization of the CPU was xx%" (can't remember if it was 18/15/8%), but either way that's pretty darn low. And with the majority of non-disaster maintenance taking place outside of regular office hours, why the heck do you need something like a quad Xeon??
 