Some misleading information has obviously been spread by nVidia, but it has also already begun on this thread. Yes, nVidia is guilty of reusing GPUs within new cards, but there is a difference, and in most cases it is a big one. When you say "recycle" you are saying it as a bad thing, but it really isn't (it is highly annoying, though). nVidia has taken its best (and best-ish) GPUs and shrunk them, which at worst means cooler running and less power drain. At best it means they bump up the speed and you get even more performance.
You may see this as a betrayal of your confidence (saying one thing and doing another), but it's like Christmas: you get a brilliant product (which none of you can deny) for less power consumption. The reuse of the technology is not the point of the exercise; the extraction of better performance is. When you understand your technology better you can get the best from it, and nVidia, like you on a new program, is still learning. When it creates these new bits they may work well, but "well" isn't good enough. When nVidia brings out a new GPU, it's because everything has been extracted from the current setup. Those lessons are then carried over and evolved in the next generation. Creating brand-new GPUs every generation would only serve to drain funds and decrease performance across each generation.
For god's sake, Intel are releasing their own GPU, which is basically a bunch of MMX-style processors; they know those backwards and forwards, so they will produce a blistering setup straight away. Fair enough, they will face some hills, but I'm willing to bet they will easily be fighting for the title of "best card".
My annoyance is them using the wrong GPUs again. The GTX295 is a case in point: 2x GTX260? WTF? They take the second-rate GPU setup and double it instead of using the GTX280. If they are using the GTX260 for a valid reason then that's fine, but I want to know that reason (power efficiency? better scaling? higher bandwidth? etc.).
Another annoyance of mine is ATi. They can build decent stuff (I know, I bought their last good card, the X800XT PE), but all I've seen of late is catch-up tech. The latest generation is a respectable attempt by ATi to give nVidia a kick in the knackers, but to anyone who actually wants to see, it doesn't land. The 4870 (and 4870X2) are very impressive in current games, but they are going flat out to do it. nVidia's offerings hold their own but don't deliver a final blow, not because they can't but because they haven't been taught to. With CUDA- and Havok-enabled tests you can see the gap (or rather the vast yawning chasm) between nVidia and ATi, but those are TESTS, FFS. NO game utilises nVidia's CUDA system properly yet; even UT3 barely scratches it. The problem is programmers being lazy (I know, I'm one of 'em): to gain full optimisation they must build the software from the ground up to be ready for the hardware. Massive development time and massive cost. Only then will nVidia's true efforts be seen by the masses.
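To show what I mean by "built from the ground up", here is a rough sketch of my own (the names like scale_and_add are made up, this isn't from any real engine) of how even a trivial loop has to be rewritten as thousands of GPU threads before CUDA gives you anything:

    #include <cuda_runtime.h>

    // One GPU thread handles one array element, instead of one CPU loop
    // grinding through all of them.
    __global__ void scale_and_add(const float* x, float* y, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // Launch enough 256-thread blocks to cover all n elements:
    // scale_and_add<<<(n + 255) / 256, 256>>>(d_x, d_y, 2.0f, n);

And that is the trivial case. Restructuring an entire engine's data and logic so it can feed the card like that is where the massive development time and cost come from.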
Anyone done CFD? Processing is usually done by supercomputers and takes several hours. nVidia can do it in real time. ATi cannot.
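For those who haven't touched CFD, here is a toy sketch (mine, not from any real solver, grid sizes and field names assumed) of why it maps so well onto a GPU: each cell's update only needs its neighbours, so every cell can be its own thread.

    // One thread per grid cell; each cell is updated from its four neighbours.
    __global__ void jacobi_step(const float* in, float* out, int nx, int ny)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x > 0 && x < nx - 1 && y > 0 && y < ny - 1) {
            out[y * nx + x] = 0.25f * (in[y * nx + x - 1] + in[y * nx + x + 1]
                                     + in[(y - 1) * nx + x] + in[(y + 1) * nx + x]);
        }
    }

Millions of cells, all updated at once, every iteration. That's the whole trick.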
Ray-tracing (that thing that makes Pixar movies so wonderful) takes several hours per frame on huge rendering farms. nVidia does it in real time. ATi takes minutes.
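Same story with ray-tracing. Here's a toy of mine (one hard-coded sphere, nothing like a real renderer) just to show that every pixel's ray is independent of every other, so every pixel gets its own thread:

    // One thread per pixel: fire a primary ray and test it against one sphere.
    __global__ void trace_pixels(unsigned char* image, int width, int height)
    {
        int px = blockIdx.x * blockDim.x + threadIdx.x;
        int py = blockIdx.y * blockDim.y + threadIdx.y;
        if (px >= width || py >= height) return;

        // ray from the camera at the origin through this pixel
        float dx = (px - width * 0.5f) / width;
        float dy = (py - height * 0.5f) / height;
        float dz = 1.0f;
        float len = sqrtf(dx * dx + dy * dy + dz * dz);
        dx /= len; dy /= len; dz /= len;

        // sphere at (0, 0, 3), radius 1: solve |origin + t*dir - centre|^2 = r^2
        float cx = 0.0f, cy = 0.0f, cz = 3.0f, r = 1.0f;
        float ox = -cx, oy = -cy, oz = -cz;            // origin minus centre
        float b = 2.0f * (ox * dx + oy * dy + oz * dz);
        float c = ox * ox + oy * oy + oz * oz - r * r;
        float disc = b * b - 4.0f * c;

        // white where the ray hits the sphere, black elsewhere
        image[py * width + px] = (disc > 0.0f) ? 255 : 0;
    }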
The gap is clear but programmers won't let the masses see it yet.
GAHHHHHHHHHHHHH!
btw yes I'm extremely angry now