The change in CPU sockets.

Rastalovich

(Put aside for a minute that I'm all for progression and steady progress.)

Apart from requiring new mobo purchases across the board, is there any real advantage in Intel changing their socket for the next round?

Now I know that they've basically taken a bunch of 775 CPUs, slapped them together, and come out with a rectangular design all under one roof. But to what end?

We've seen some screenies over the last few days of some pretty interesting, albeit inaccurate, benches/utils. Low FSB, 8 cores, probably lower voltages across the CPU range. But I see Extreme CPUs on these tests.

It's wholly probable that the best chips are being used and tested, hence their unlockable x22 multi - or perhaps x22 is the new x8/x9, who knows for sure. Mighty shiny they look too, scoring 2x the results of a 775 quad-core variant - which of course is practically what they are, soldered together in a sense.

Now this is the thing. The Extreme CPUs are one thing, and they'll come out spanking the 775 Extremes at roughly 2x, I would imagine. Then you have the regular desktop CPUs. You'll get the locked versions, perhaps 2x a Q9450 and so on. The reason the bench shows 2x is of course the 8 cores, which in the right bench-tester util will show 2x. I see 8 cores with a smaller L2 cache but a boost in L3 cache, for what it's worth.
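
As a rough illustration of why a synthetic bench can show 2x while ordinary software won't (my own back-of-the-envelope Amdahl's law numbers, nothing taken from the screenies), consider:

#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
// the work that can run in parallel and n is the number of cores thrown at it.
double speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    // A synthetic bench that is effectively 100% parallel really does double
    // going from 4 cores to 8 cores...
    std::printf("bench   (p=1.00): 4 cores %.1fx, 8 cores %.1fx\n", speedup(1.00, 4), speedup(1.00, 8));
    // ...a game that only parallelises ~30% of its work barely moves...
    std::printf("game    (p=0.30): 4 cores %.1fx, 8 cores %.1fx\n", speedup(0.30, 4), speedup(0.30, 8));
    // ...and a folding-style workload sits much closer to the bench.
    std::printf("folding (p=0.95): 4 cores %.1fx, 8 cores %.1fx\n", speedup(0.95, 4), speedup(0.95, 8));
    return 0;
}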

My question is this: the desktop, or household/office user, is 99.99% not going to use even dual core. Now they have 8 cores available, which may well rise to 16 cores before too long. Not Intel's fault, but of no use to this market.

On the business/professional side of things, yes, you can get some benefits in certain areas. 16 cores, and they want you to take all your servers and put them in one machine, so they all fail at the same time, but it'll save energy and space. (Green.)

Point being: we barely use 775 to its capacity (disregarding the FSB), so why do we need a new socket other than to sell mobos? The folding community probably uses CPUs to their maximum in the public arena, possibly followed by (and including) enthusiast gamers. The CPUs get faster and the games continue in the same recycled vein (very green of them also).

Pull a 775 CPU out of a gaming machine and put in the new-socket CPU with its mobo, and the gamer will not notice. The folder will for sure. You can see SMP techs licking their chops in anticipation.

The gamer won't notice, the household/office worker won't notice. The benchmarker will, until they break the thing, and business will notice. So why not keep it as a 771 replacement only?

Purely a low-power, low-wattage pull? The gamer doesn't care, I'd argue the office/household might to a degree, and business will because of the bills - do folders care about being green or about bills? They want WUs and would kill their grandmother to get them. (Not forgetting of course there are other worthwhile causes similar to F@H out there - this is not where I mention SETI.)

Whilst all these extreme 8/16-core CPUs are wowing everyone, so many months down the line you will get the cut-down versions for cheap, general use. You know for sure that these CPUs will perform worse than the cheap mainstream 775 CPUs, and you have now stepped backwards.

You buy your PC World PC, and it will boast the new socket but have a lower-than-775 CPU with more cores - that you won't use.

Meh?
 
Addressing the 775 -> new socket question, for this generation at least: they need the extra pins. It's got an IMC (integrated memory controller).
 
You're right Rast, and yes I'm gonna harp on about it again... but we don't need a new socket; we can't program for the existing quad cores correctly anyway. More cores does not always mean greater power - only when the software is optimised correctly do you get the positive results the manufacturers claim. With software development falling further and further behind, we're being left with hardware that's beyond what we can program for. Good recent examples of this are Microsoft's bid to optimise multi-core applications, and NVidia trying to bypass the problem with CUDA. Not until machines can code themselves (the birth of AI) are we gonna see the software side of things catch up. One thing we have to remember is that new, improved designs for most hardware can be done in a fraction of the time, because they are in all essences designing themselves with very little input from the VLSI engineers - who are basically there to say yes or no to whether the design the circuit-layout software has come up with is marketable, and whether production begins. Do you think any human on the planet can track and place a billion transistors? Mmmm... I can do about 100. A billion... that's a lot of voltage drops to measure...

But I agree rasta, let's try and get our heads round making the most of what we've got before just throwing cores and new sockets at a problem that isn't hardware related...
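
To make the "software has to be written for the cores" point concrete, here is a minimal sketch (assuming plain standard C++ threads, not anything from the posts above): the same summing job done serially and then split by hand across cores. The second version only exists because somebody wrote the split; the extra cores do nothing at all for the first one.

#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// One core, one loop: extra cores sit idle no matter how many the CPU has.
long long serial_sum(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

// The same job split by hand into one chunk per thread, then recombined.
long long parallel_sum(const std::vector<int>& v, unsigned threads) {
    std::vector<long long> partial(threads, 0);
    std::vector<std::thread> pool;
    const std::size_t chunk = v.size() / threads;
    for (unsigned t = 0; t < threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = (t + 1 == threads) ? v.size() : begin + chunk;
        pool.emplace_back([&v, &partial, t, begin, end] {
            partial[t] = std::accumulate(v.begin() + begin, v.begin() + end, 0LL);
        });
    }
    for (std::thread& th : pool) th.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}

int main() {
    const std::vector<int> data(10000000, 1);
    const unsigned cores = std::thread::hardware_concurrency();  // 2, 4, 8...
    std::printf("serial:   %lld\n", serial_sum(data));
    std::printf("parallel: %lld across %u cores\n", parallel_sum(data, cores ? cores : 2), cores);
    return 0;
}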
 
I'm with Rast and Jamster on this one... However, I'm not going to throw the blame on the chip manufacturers for stepping it up once again. They are doing their job, making faster and more powerful chips. That's what they're supposed to do! Let's instead harp and rag on about the software developers. Multi-core technology has been on the scene for over a decade now, and dual cores have been around for what, 3+ years? IMO, at this stage in software development everyone should be programming for multi-core, for everything... It should be an industry standard to code for multi-core regardless of what type of software it is (AIM, the next Battlefield game, Photoshop, or MSN Messenger... EVERYTHING!)

IMO, it's the software companies that need to step things up and get with the program. The technology is out there, so let's make use of it, shall we?!
 
Ham said:
Addressing the 775 -> new socket question, for this generation at least: they need the extra pins. It's got an IMC (integrated memory controller).

Could they have massaged the *reserved* and not-required pins in the 775 to do this? I don't know, tbh.

As for Intel pushing boundaries, there could be another issue with this - in the sense of software and hardware people branching off in different directions.

What I mean here is that in 2012, when Intel announce some 24-core CPU, what would be the point if programmers, generally, aren't going to use the cores?

More prudent here perhaps would be to scrap the idea of cores as they exist today and present a single or dual core to the PC - but inside, the CPU works on the instructions it's given the way the theory says a multicore PC should.

e.g. programs continue to be written the way that they are. Offer the PC the look of a dual core, for OS and program use, but behind the scenes, with no configuration necessary, the CPU executes its instructions on a multicore basis.
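
As a rough software analogy of that idea (my own sketch with made-up names, not a description of how any real CPU works): the caller writes plain, linear-looking code against a single object, and that object quietly fans the work out across whatever cores exist. The catch, of course, is that the jobs still have to be independent of each other, which is the hard part the hardware can't guess.

#include <cstdio>
#include <functional>
#include <future>
#include <vector>

// Looks like a single, serial processor to the caller; spreads work out underneath.
class TransparentCpu {
public:
    // The caller just queues jobs one after another, as if single-threaded.
    void submit(std::function<void()> job) { pending.push_back(std::move(job)); }

    // Behind the scenes, every queued job is launched on its own thread and
    // the call only returns once they have all finished.
    void run() {
        std::vector<std::future<void>> inflight;
        for (std::function<void()>& job : pending)
            inflight.push_back(std::async(std::launch::async, job));
        for (std::future<void>& f : inflight) f.get();
        pending.clear();
    }

private:
    std::vector<std::function<void()>> pending;
};

int main() {
    TransparentCpu cpu;
    for (int i = 0; i < 8; ++i)
        cpu.submit([i] { std::printf("job %d done\n", i); });
    cpu.run();  // the caller never mentions threads or cores
    return 0;
}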

I can't guarantee you that even in 5 years' time programs will use cores effectively. I mean, is anyone here at university now getting lectures on multicore theory? I bet they aren't. (That's another argument in itself; I don't personally feel that those without the aptitude to program should even be taking the courses - and as a result the courses have to cater for the whole public, with varying aptitudes, and they want passing scores for their records.)

Why aren't cores being discussed in degree courses at university level? Well, for a start you'd have to retrain the lecturers. I bet they skim over it in subjects like electronics. Anyone know anyone who knows of anyone doing electronics or even physics as a degree?

Oops rant >.<
 
Yeah, the only problem with the hardware developers is leaving it on the standard IBM-compatible, archaic design, which needs to be addressed. But it's not their problem; they're just doing the job they get paid for. The big problem is the software design. The major issue is that software engineers get taught to use existing routines to speed the design process up. TBH, of the software engineers I've seen recently, less than 1% even know what an assembler routine looks like... everyone is just reusing rehashed C++ routines that were created 15 years ago... it's a shame...

And what you're talking about rasta, in terms of letting the hardware optimise the code as it goes along, needs strict rules to adhere to; it would have to work as some kind of advanced instruction compiler, and I don't think this is gonna happen with the IBM standard. The way it would work leans more towards how games consoles operate - and let's face it, the programmers on games consoles push the hardware to its limits over years of dedicated programming on the same instruction set. Which leads us back to basically levelling the existing PC architecture and starting afresh, which I think is a great idea. Plus there's the argument of core design: Intel's quad-core design, even though proven to be faster, I feel is not as well thought out as AMD's model. I could harp on about 2x Core Duos and all that, but I think most people on here will understand my point. Things need to change or we're gonna end up with the equivalent of a modern PC but only the software to play a very basic Pong-style game :D
 
But what is the point in Intel doing what they're doing if they have a world-beating multi-core CPU that no one can use any better than an AMD K7?

I'd throw that gauntlet down not necessarily at the CPU manufs, but at the industry in general.

I have no doubt you could use an existing CPU in a mobo that carried none of the legacy you mention. From there it would be for the software to adapt.

Why won't mobos be created like this?

You see all the instruction sets available to a CPU in the CPU-Z readout; assign a core to each of those. There's a simple way around it. It's a botch, but it'd probably work and present a way forward.
 
And just to add... let's get back to an all-in-one system. There's your computer; it's as powerful as anything on the market; you can't upgrade it for 5 years, so you have to make do with it and learn how to harness it, warts and all. Amiga style. Things would have to change... maybe the consoles are unfortunately leading the way. How long before we see a complete entertainment machine that boasts the benefits of a PC and a console with no compromise on either?
 
In the sense of the Amiga architecture, there was an advantage in using separate processors for different things (a multi-core spread out on a PCB, if you like), with a 2 MB cache through DMA, a main CPU for "stuff", and extended memory.

Now if you could clock those processors to today's speeds, you'd be multitasking all over the place, as standard. What's more, the 68k/604e+ would allow for that programming as standard also. 8086 is more linear.
 
I completely agree; we've been discussing this a lot lately. I think a step backwards is needed to push further forward. Existing architecture is hitting its ceiling, and the only reason that ceiling keeps being raised is to allow sloppy programming.
 
I read all the way to Frag's post. Here's the deal: I agree with you all, BUT you have to realise that when dual core came out, we weren't using it. Now most everybody has quads and we're only just now using dual core. If they come out with 8 cores, then maybe quads will start to get used, and like quad core now, octo-core will be the e-peen thing to do. Since I'm always a generation behind in PCs, it's low cost this way, and I'm actually using my hardware to its full extent rather than having e-peen out the wazoo. That's right, the wazoo. And not only that, it forces me to keep up on PCs and programming to get the most out of my PC.

What I'm saying is it's a good thing they are coming out with a new socket and new tech, because it will in turn lower prices for everybody and put the new tech on the scene for programmers to at least start thinking about programming for it. I mean, did anybody complain about 939? Not really, it was a monster :D (still is).

Oh, and F@H is kinda pointless on the CPU if you have an 8-, 9-, or 200-series GPU. :) (Or an ATI folder.)
 
I don't personally think it will drive prices down below what we're already seeing: £20 for a 775 mobo - that's cheap - and £35 for a dual core. What will happen is that the current high end will go end-of-line. For a while the high end may drop in price so the distis can get rid of them (check out the 8800 GTX prices at the mo and compare them to 4 months ago). But with current trends of hardware development crossed with software, software is falling further and further behind; like you said PP, we're only just optimising for duals and quads are cheap enough, and by the time we have good knowledge of programming for duals we will probably be on 36-core CPUs. Alright, that may be a tad of an exaggeration... I think Rasta's approach of letting the CPU manipulate the software and optimise the code itself is the ideal way forward, as it allows for a straightforward linear programming method which will be optimised for all multicore CPUs - which is easier for us all to understand. It may mean cheaper but short-lived prices, and alright, if you're interested in a good power-to-cost ratio then for a time that's right, but the industry self-regulates in terms of price and production, so it's constantly the same cycle, and there's always something better for the same money...
 
IMO it's mostly a ploy to sell more in terms of chipsets etc.

But I'm a cynical git.

At least GPUs are starting to do something more revolutionary, by comparison.
 
I can see why Intel had to rearrange the socket for Nehalem: with an on-die memory controller, the connections will be totally different - think Athlon XP to 754 (A64). Of course the extra interconnects for 939 were for PCI-E. Perhaps we'll see some more changes to standards? Always something to look forward to, eh?

775 has been around for a while, so it's not like Intel haven't extended its life. Plus you'll deffo need a new mobo for Nehalem anyway, so a new socket makes very little difference.

It would be nice to see someone revolutionise the computing world and move away from x86, but I can't see that happening.

As PPm said: we'll be using quad core soon (maybe), so why not step it up one and go octo-core :p
 
They needed the extra pins for the DDR2 memory controller, and 754 couldn't handle dual channel. I understand the need to change socket types; it's just that I haven't seen enough justification for Intel's current move, whereas I have seen justification in AMD's architecture. Even though AMD have been underperforming, their ideals are sound: introducing a new chip design that's backwards compatible. Their whole design of quad (and TRI... WHY WHY WHY) are genuinely good ideas, albeit with bad materialisation...
 
There is another way of looking at this. Imagine Intel came out with a new type of pin connector. We had edge connectors for the cartridge types, bendy pins for the sockets, straight pins with ZIF, etc. Now they invent something with 10,000 connections, over 9 times the redundancy in pins, and all their CPUs for the foreseeable future will use it.

Would mobo manufacturers complain?
 