I write software for a living, so I guess I can chime in. One of the problems with multithreading a single task is that you're often waiting for the output of a previous part of your code before the next part can run. Like if you're doing a very long series of mathematical calculations where each calculation depends on the result of the previous one.
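Roughly like this (a made-up calculation, just to show the shape of the problem):

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> result(1000000);
    result[0] = 1.0;
    for (std::size_t i = 1; i < result.size(); ++i) {
        // Each step needs the previous result, so this loop is inherently serial:
        // a second thread can't start on result[i] until result[i - 1] exists.
        result[i] = result[i - 1] * 1.0000001 + 0.5;
    }
    std::printf("%f\n", result.back());
}
```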
What you'll often find with programs that are multithreaded is that they break down what they're doing into smaller pieces that can be executed on their own: self-contained, not relying on data being produced by other threads.
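When the pieces really are independent, splitting them across threads is straightforward. Again a made-up example, just a sketch of the idea:

```cpp
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1000000, 2.0);
    const int num_threads = 4;
    std::vector<double> partial(num_threads, 0.0);
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / num_threads;
    for (int t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = (t == num_threads - 1) ? data.size() : begin + chunk;
            double sum = 0.0;
            for (std::size_t i = begin; i < end; ++i)
                sum += data[i] * data[i]; // no chunk needs data from any other chunk
            partial[t] = sum;
        });
    }
    for (auto& w : workers) w.join();

    double total = 0.0;
    for (double p : partial) total += p;
    std::printf("%f\n", total);
}
```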
So, for example, take a game: you have entities, models, sounds, AI, networking, the user interface, input handling, events triggering. There's a lot of stuff going on there that you can break out of one thread into multiple threads. Have one thread dedicated to AI, one dedicated to networking, one dedicated to physics calculations. Then you'll have a listening thread / master thread which dishes out the jobs, like the conductor of an orchestra.
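A very stripped-down sketch of what I mean (the subsystem functions here are empty placeholders, not real engine code, and a real engine would use a proper job system rather than fixed threads):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

// Placeholder subsystem updates -- in a real engine these would be the heavy lifting.
void updateAI()         { /* pathfinding, decision making, ... */ }
void updatePhysics()    { /* collisions, rigid bodies, ... */ }
void updateNetworking() { /* send and receive packets, ... */ }

void runSubsystem(void (*update)()) {
    while (running) {
        update();
        std::this_thread::sleep_for(std::chrono::milliseconds(16)); // ~60 ticks/sec
    }
}

int main() {
    // The "conductor": the main thread spins up one worker per subsystem
    // and carries on with the rest of the game (rendering, input, ...).
    std::thread ai(runSubsystem, updateAI);
    std::thread physics(runSubsystem, updatePhysics);
    std::thread net(runSubsystem, updateNetworking);

    std::this_thread::sleep_for(std::chrono::seconds(1)); // stand-in for the game running
    running = false;

    ai.join();
    physics.join();
    net.join();
}
```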
Now, obviously this method of threading has a lot of problems with scaling, because there are only so many different parts of a game that you can compartmentalise in this way, and some jobs are simply not intensive enough to deserve their own thread. The real future is extremely optimised game engines.
The kind of engines where it's no longer textured objects in the game. It's a game engine where everything is a voxel: a volume element representing a piece of an object in 3-dimensional space. To put this another way, imagine your display with all its millions of pixels; now imagine those pixels in 3D, and dedicating a thread of your CPU to managing every single individual pixel represented there.
That, really, is the only way forward for multithreading in games: treat every single pixel in the game as its own independent object or "voxel", with a dedicated thread managing it in every way: whether sound is present in that pixel, its physics interactions, its movement trajectory, its shading.
Now, there are some games today that already use voxels, like Minecraft (all its blocks are technically voxels), but I'm obviously talking about a much larger (well, smaller) scale, where the individual pixels are voxels instead of large textured blocks.
I think we are a long way away from getting any workable game engine of this magnitude. We'll probably get real-time ray tracing before we get this.
But my point is, for gaming, these kinds of high thread counts are not going to be utilised any time soon. And don't even think about buying this as future-proofing, because by the time we have game engines that utilise this many cores, there will already be consumer-grade CPUs with this many cores or more at 1/20th the price, making the investment today a waste of money.
For high core counts like this, the best use is server and workstation software. Rendering, obviously, like was said in the review; protein folding, SETI@home, scientific calculations, fluid simulations, etc.
By the way, just in case anyone is interested in how NVIDIA gets PhysX cloth and water physics simulations to work across their GPUs (which have thousands of CUDA cores nowadays): they essentially use the same idea I described above. They use particles (think of them as the voxels from before) which are tethered together to create, in the case of cloth, a sheet, and then the GPU's CUDA cores are divided up to calculate each particle's movement in parallel.
Each particle works completely independently of the others, but because they are tethered together (always telling their neighbours their coordinates in space), each one can factor its neighbours' positions into its own movement calculation. That creates the collisions, and that's what makes the cloth ripple out to the edges when a strong breeze hits the centre.
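I don't have NVIDIA's actual code, obviously, so take this as my own toy simplification of the "tethered neighbours" idea: a 1D row of cloth particles, each one pulling toward its neighbours' last known positions.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 32;
    std::vector<float> pos(n, 0.0f), next(n, 0.0f);
    pos[n / 2] = 1.0f; // a "gust of wind" displaces the centre particle

    const float stiffness = 0.25f;
    for (int step = 0; step < 100; ++step) {
        for (int i = 1; i < n - 1; ++i) {
            // Each particle only reads its two neighbours' old positions,
            // so every update is independent and could run on its own core.
            const float target = 0.5f * (pos[i - 1] + pos[i + 1]);
            next[i] = pos[i] + stiffness * (target - pos[i]);
        }
        pos.swap(next); // the disturbance spreads one neighbour further each step
    }
    std::printf("displacement near the edge: %f\n", pos[1]);
}
```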
I hope this was somewhat informative and not too boring