^ and also
programming does not work like that!!!
It is not down to the threads that are available, but more down to the number of IRQs pushed onto the CPU's register stack.
This is something that modern programmers fail to realise (bad teaching) - only those with an understanding of assembly language programming would grasp it.
...and it is also why modern apps crash and BSOD, due to memory leaks and CPU bottlenecking.
IMHO, modern .NET languages should be banned. Every programmer should be taught ANSI C as standard (no OOP - just pure real-time coding).
Let's hope they deliver good performance; anything close to the SB chips would be good, as an 8-core CPU for a bit more than an i7 2600K would be amazing value!
Hopefully the quad-core chips will be cheap with good performance, as I may just have to upgrade my i3!
[I have the worst understanding of software on this whole forum for the most part]
Could they make code for it in the future, though, or is there no chance? (I mean recognising two or more cores as one, if ways were found to tackle the error rates and power demand.)
in a word.... a very short one..... NO
A program does not run differently on machines with a different number of cores.
I.e.:
32-bit code does not run faster on a 64-bit machine... in fact, it actually wastes memory.
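For what it's worth, the memory difference between a 32-bit and a 64-bit build comes down largely to pointer size. A tiny sketch in plain C (the struct is just an example) that you can compile with `gcc -m32` and `gcc -m64` to compare the footprints yourself:

```c
#include <stdio.h>

/* The same struct in a 32-bit and a 64-bit build: the pointer field is
 * 4 bytes in the former and 8 bytes in the latter, which is where the
 * difference in memory footprint shows up. */
struct node {
    struct node *next;
    int          value;
};

int main(void)
{
    printf("sizeof(void *)      = %zu bytes\n", sizeof(void *));
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    return 0;
}
```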
Cores, in coder terms, are addressed through affinities (core0, core1, core2 ... coreN).
To write to all cores simultaneously requires OOP code (object orientation), or access to a low-level language (PureBasic can do it).
A part of a program (a class) is assigned an affinity, but always reports back to core0 :: core0 holds a class that encases all the global variables.
core0 will always wait until all the other affinities (cores) have finished their processing before triggering them again.
So... it does not matter how many cores you have... they will only work at the speed of the slowest core (normally core0, as it has to manage the others).
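As a rough illustration of the idea, here is a minimal sketch in plain C with POSIX threads, assuming Linux (pthread_setaffinity_np is a GNU extension; NUM_WORKERS and worker are just placeholder names): the main thread plays the "core0" role, pins each worker thread to its own core, and then waits for every one of them to finish.

```c
#define _GNU_SOURCE           /* needed for pthread_setaffinity_np / CPU_SET */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NUM_WORKERS 4         /* placeholder: assumes at least 5 cores (0..4) */

/* Pin the calling thread to one specific core (an "affinity" in the post's terms). */
static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* Each worker runs on its own core and does its slice of the work. */
static void *worker(void *arg)
{
    long core = (long)arg;
    pin_to_core((int)core);
    printf("worker on core %ld doing its share of the work\n", core);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];

    pin_to_core(0);           /* the main thread stays on core0 */

    /* Hand a chunk of work to core1..coreN ... */
    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)(i + 1));

    /* ... then wait until every worker has finished - the whole batch is
     * only as fast as the slowest worker, as the post points out. */
    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_join(threads[i], NULL);

    return 0;                 /* build with: gcc -pthread affinity_demo.c */
}
```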
are you lost yet?
As for bugs and problems occurring - it is down to two things:
the coder
or
the compiler (M$ sucks big time... Borland used to rule)
So my understanding: you can't alter how many cores make up core0, because that would be like pulling the gate off its hinges / changing the size of a plughole and expecting the plumbing further down the pipeline to still cope - which it won't?
But with hyperthreading and adding cores, core0 still remains in the same place?
I think I'm on the starting block of understanding this but will need some considerable shouting at!
Intel cleverly thought of utilising more core power by creating Hyper-Threading.
Most apps today are still 32-bit, but most CPUs have 64-bit core(s)... meaning that a 64-bit core could be set up to run 2x 32-bit threads "virtually".
Even though affinity0 (core0) carries extra overhead from the workload of virtualising itself and the other cores, it still gains working power from Hyper-Threading!
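You can see the effect from software, too. A hedged sketch in C, assuming Linux: the OS reports *logical* processors, and on a Hyper-Threading part that count is typically double the number of physical cores.

```c
#include <stdio.h>
#include <unistd.h>

/* sysconf() reports logical processors, i.e. what the scheduler sees.
 * On a quad-core CPU with Hyper-Threading enabled this is usually 8. */
int main(void)
{
    long logical = sysconf(_SC_NPROCESSORS_ONLN);

    printf("logical processors visible to the OS: %ld\n", logical);
    printf("(with Hyper-Threading this is typically 2x the physical cores)\n");
    return 0;
}
```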
Just been reading somewhere else that the FX-8150P is to be sold at $300!
Eight native cores... talk about value for money!
Or more of a worry. Doesn't that speak volumes?
If they're able to knock out a chip with 8 cores for $300, it's a concern to me. AMD don't give things away. Price to performance... it's always price to performance. I'm sure it'll be $300 worth, but it ain't gonna be no Sandy beater at that price. If it were, then trust me, they would want blood for it. I clearly remember AMD bashing out the original FX series, which were rebadged Opterons. How much? About $650 much. And the reason? They laughed at anything Intel had going.
Intel's only response, due to their terrible CPUs at the time, was to rebadge a Xeon with loads of cache as a "Pentium 4 Extreme Edition" and sell that... for $850.
If the 8-core Bulldozer really were as fast as a 4-core Sandy, it would be $600 or more. There's simply no way to manufacture 8 cores that fast and sell them for that.
If I'm wrong, of course, then I have no problem dining on my headwear, but I just can't see it happening. Especially when AMD issued a statement around a year ago that basically said they were no longer even going to try competing at the top end, as they just couldn't afford to. On those rebadged Opterons they were taking losses just to have the fastest CPU in the world. If you consider that at the time the equivalent Opteron was $1000+ and the only real difference was ECC support, then yeah, they were making losses.
Usually a flagship model like that will lose a company money. Asus do it often... Asus Mars, Asus Ares. The R&D that goes into them and the 'special' manufacturing of the unique components (coolers, shroud, PCB, even the schematics), coupled with the low sales volume (I mean, they're a bit too exclusive for Joseph Public), usually sees them making a loss. But they do it for the bragging rights and to attract people to their lesser cards. Kinda like, it earns them a cool factor.
That's like buying a Skoda so they can compete with Ferrari....