More Nvidia GPU Recycling

I just think it's quite sad that they do it, AND get away with it. Surely people do some research before they buy!?

But then if they did, Nvidia wouldn't be getting away with it.
 
This is where you need some empathy for the millions of people who shop at PC World, spend hours on PC helplines and hand over stupid amounts of money to have basic problems fixed.

Yes, we all know the difference (or, to be more accurate, the lack of one) between all these products, but there's nothing to stop Joe Public from walking into any high-street computer store and thinking a 9800GT or GTS150 is an upgrade from the 8800GT he got in his two-year-old Dell.

It's exploitation imo. nVidia know full well that modern computers are complex pieces of kit and there's plenty of money to be made from poor buyers who don't know EXACTLY what they're dropping their cash on. Hell, you only have to spend a few minutes in PC World to hear the staff feeding absolute bullcrap to the uneducated... they'll have no problem claiming a GTS150 is a "superior" GPU.

"Yep, this 7300GT has 512MB RAM so it'll play any game out today fine"

:mad:
 
I'm wondering if this can even be legal? Surely it could be classed as misleading to some degree. As D-Cyph3r mentioned, that's exactly what's going to happen: people will realise they've had old tech sold to them rebranded as new and will be ****ed.

Wasn't it not so long ago that nVidia had a large wave of chip recalls? So what, now they're trying to flog the chips people avoided? Or am I just being cynical here?
 
Some misleading information has obviously been spread by nVidia, but it has also already begun on this thread. Yes, nVidia are guilty of reusing GPUs within new cards, but there is a difference, and it is a big one in most of them. When you say "recycle" you make it sound like a bad thing, but it really isn't (it is highly annoying, though). nVidia has taken its best (and best-ish) GPUs and shrunk them, which at worst means cooler running and less power drain. At best it means they bump up the speed and you get even more performance.

You may see this as a betrayal of your trust (saying one thing and doing another), but it's like Christmas: you get a brilliant product (which none of you can deny) for less power consumption. The reuse of the technology is not the point of the exercise; the point is the extraction of better performance. When you understand your technology better you can get the best from it, and nVidia, like you on a new program, is still learning. When it creates these new parts they may work well, but "well" isn't good enough. When nVidia brings out a new GPU it's because everything has been extracted from the current setup, and those lessons are then carried over and evolved in the next generation. Creating a brand new GPU every generation would only serve to drain funds and hurt performance across the generation.

For god's sake, Intel are releasing their own GPU, which is a bunch of MMX processors, because they know them backwards and forwards and so will produce a blistering setup straight away. Fair enough, they will face some hills, but I'm willing to bet they will easily be fighting for the title of "best card".

My annoyance is them using the wrong GPUs again. The GTX295 is a case in point: 2x GTX260? WTF, take the second-rate GPU setup and double it? No, use the GTX280. If they are using the GTX260 for a valid reason then that's fine, but I want to know that reason (power efficiency? better scaling? higher bandwidth? etc.).

Another annoyance of mine is ATi. They can build decent stuff (I know, I bought their last good card, the X800XT PE), but all I've seen of late is catch-up tech. The latest generation is a respectable attempt by ATi to give nVidia a kick in the knackers, but it doesn't land one for anyone who actually wants to see it. The 4870 (and 4870X2) are very impressive in current games, but they are going flat out. nVidia's offerings hold their own but don't deliver a final blow, not because they can't but because they haven't been taught to. With CUDA and Havok-enabled tests you can see the gap (or rather the vast yawning chasm) between nVidia and ATi, but they are TESTS FFS. NO game utilises nVidia's CUDA system properly yet; even UT3 barely scratches it. The problem is programmers being lazy (I know, I'm one of them): to get full optimisation they must build the software from the ground up to be ready for the hardware. Massive development time and massive cost. Only then will nVidia's true efforts be seen by the masses.
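To give a rough idea of what "built from the ground up for the hardware" means, here's a bare-bones CUDA sketch (a made-up toy of mine, nothing from a real game or engine): the work has to be cut into thousands of independent threads before CUDA can do anything useful with it. The array size and values below are arbitrary, purely for illustration.

// Toy example: one GPU thread per array element, y[i] = a*x[i] + y[i]
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                  // ~1 million elements (arbitrary)
    const size_t bytes = n * sizeof(float);

    // Host-side data
    float *hx = (float*)malloc(bytes);
    float *hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy it across to the card
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);
    cudaDeviceSynchronize();

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

The point isn't the maths, it's the shape: a CPU walks that loop on a handful of cores, while the GPU throws a thread at every element, and most game code simply isn't written that way yet.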

Anyone done CFD? Processing is usually done by supercomputers and takes several hours. nVidia can do it in realtime. ATi cannot.

Ray-tracing (that thing that makes Pixar movies so wonderful) takes several hours per frame on huge rendering farms. nVidia does it in realtime. ATi takes minutes.
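Same idea with ray tracing (again, a made-up toy, nowhere near a real renderer): every pixel gets its own thread firing one ray at a single hard-coded sphere, and no pixel cares what any other pixel is doing, which is exactly why the GPU eats this kind of work.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// One thread per pixel: does the ray through this pixel hit the sphere?
__global__ void raySphere(unsigned char *img, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Ray from the origin through a virtual image plane at z = 1
    float rx = (x - w * 0.5f) / w;
    float ry = (y - h * 0.5f) / h;
    float rz = 1.0f;

    // Sphere of radius 1 at (0, 0, 3); intersection is a quadratic in t
    float cx = 0.0f, cy = 0.0f, cz = 3.0f, r = 1.0f;
    float a = rx * rx + ry * ry + rz * rz;
    float b = -2.0f * (rx * cx + ry * cy + rz * cz);
    float c = cx * cx + cy * cy + cz * cz - r * r;
    float disc = b * b - 4.0f * a * c;

    // Real roots mean the ray hits the sphere: white pixel, otherwise black
    img[y * w + x] = (disc >= 0.0f) ? 255 : 0;
}

int main()
{
    const int w = 512, h = 512;              // image size, picked arbitrarily
    unsigned char *dimg;
    cudaMalloc(&dimg, w * h);

    dim3 threads(16, 16);
    dim3 blocks((w + threads.x - 1) / threads.x, (h + threads.y - 1) / threads.y);
    raySphere<<<blocks, threads>>>(dimg, w, h);

    unsigned char *himg = (unsigned char*)malloc(w * h);
    cudaMemcpy(himg, dimg, w * h, cudaMemcpyDeviceToHost);
    printf("centre pixel = %d (255 means the ray hit)\n", himg[(h / 2) * w + w / 2]);

    cudaFree(dimg);
    free(himg);
    return 0;
}

A real renderer obviously does vastly more per ray, but the per-pixel independence is the part that matters for the CPU-vs-GPU gap.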

The gap is clear but programmers won't let the masses see it yet.

GAHHHHHHHHHHHHH!

btw yes I'm extremely angry now
 
I'm gonna get jumped on, but I think the new naming of the cards is a good thing moving forward.

This is ofc taking into consideration that the GT2xx names denote the up-to-date cards, and the GT1xx can be compared accordingly.

In terms of having a 9800 and buying a GT1xx as an upgrade - sure, you can catch yourself out on the face of it, but you shouldn't buy something of such an outlay without at least looking into it.

With the numbers already hitting 9xxx, this was bound to happen, and it will happen again when the GT9xx cards come out, whenever they do.

It's also plain to see that the older x600GT cards are now GTx60, and the x800GTX cards are now GTx80.
 
It is, Rasto, but I think the quibble being raised is that people on forums can apparently create brand new technology better than nVidia, and so get irate when nVidia reuse GPUs to show that power utilisation has improved (efficiency+) and output has increased.

Oh wait, no they can't, they just moan.
 
I agree with you about almost everything. I work on CUDA and see the amazing power and potential there. Still, when it comes to pure graphics, ATI is certainly giving Nvidia a run for its money.
 
name='rrjwilson' said:
Some misleading information has obviously been spread by nVidia, but it has also already begun on this thread. [...]

Wow, big explanation, but pretty cool, I learned something from this post :D

But for today's games I would still go for ATI, since, like you said, today's games don't use CUDA in an efficient way, and it will take some time (a good long while) to change this situation.

Soap.
 