Xeon Phi is just an add-in card, similar to NVIDIA's Tesla range. It is a co-processor, and it uses very basic x86 cores, not the kind you'll find in a Core i7 or a desktop/server-oriented Xeon.
The chip is basically aimed at what used to be called general-purpose computing on graphics processing units, or GPGPU. That was the term in use when CUDA, Stream, OpenCL and DirectCompute first arrived on the scene, though it's heard less often now.
So, in short, what is this thing good for? It's good for hugely parallel floating-point operations: data crunching on a massive scale. This card is not the sort of thing you stick in your PC (or server) so that your programs suddenly see an extra 50 x86 cores. Your applications have to be built specifically for this card, or be written against OpenCL or DirectCompute so that they use standard libraries in conjunction with Intel's drivers. That is exactly how the very popular CUDA works today.
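To make that concrete, here is a minimal, hypothetical OpenCL host sketch for a vector add. Nothing in it is specific to Intel's actual toolchain; the kernel, the device selection and the sizes are illustrative assumptions. The point is that work has to be explicitly packaged up and shipped across PCIe to the co-processor (requested here as an ACCELERATOR-type device), rather than the card's cores simply showing up as extra CPUs:

    /* Hypothetical OpenCL vector-add; shows the host explicitly targeting
     * a co-processor rather than gaining "free" CPU cores. */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void vadd(__global const float *a,"
        "                   __global const float *b,"
        "                   __global float *c) {"
        "    size_t i = get_global_id(0);"
        "    c[i] = a[i] + b[i];"
        "}";

    int main(void) {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        /* Ask for an accelerator device (how an OpenCL co-processor is
         * exposed), as opposed to CL_DEVICE_TYPE_GPU or _CPU. */
        if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_ACCELERATOR, 1, &dev, NULL)
                != CL_SUCCESS) {
            fprintf(stderr, "no accelerator device found\n");
            return 1;
        }

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        /* Buffers live on the card; data is copied across explicitly. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof(a), a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof(b), b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

        clSetKernelArg(k, 0, sizeof(da), &da);
        clSetKernelArg(k, 1, sizeof(db), &db);
        clSetKernelArg(k, 2, sizeof(dc), &dc);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

        printf("c[42] = %f\n", c[42]);  /* expect 126.0 */
        return 0;
    }

None of that happens automatically; somebody has to write (or port) the kernel and the host plumbing, which is exactly the point about applications needing to be built for the card.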
The benefit of Intel's direction in using x86 cores is compatibility, but don't be fooled: it isn't the same x86 as on our desktops, and it lacks the complex instructions that have become standard over the years. It's just the bare minimum.
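For illustration only (a minimal sketch, assuming GCC's __builtin_cpu_supports builtin), this is the sort of check code uses for those "standard" extensions on the host side; binaries compiled to assume them can't just be dropped onto a core that only implements a bare-minimum x86 baseline:

    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();  /* initialise GCC's CPU feature detection */
        /* Extensions taken for granted on desktop/server x86 parts */
        printf("SSE4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
        printf("AVX:    %s\n", __builtin_cpu_supports("avx") ? "yes" : "no");
        return 0;
    }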
I actually think this chip is simply Larrabee with error-correcting code (ECC) support included.
Now that I've deflated some of your balloons, I'm going to fire one more bullet. The new Kepler-based NVIDIA Tesla K20 will feature 4 teraflops of single-precision and over 1 teraflop of double-precision floating-point performance, and the K20 is also ECC enabled. That is this Xeon Phi's real competition, and I think Intel may actually find it difficult to break into this market considering CUDA's huge lead.