Don't make me lol. Not today.
Really, they might be. When you're the one writing the rules, you can make them as ridiculous as you like and you'll still stand by them.
NVidia isn't forcing anyone to do anything. If you choose to buy a high-end GPU but don't want to pay the premium for RT, then buy a 1000 series card; they're basically what you're asking for, given that outside of the added Tensor & RT units the shaders themselves haven't really changed meaningfully between Pascal/Volta/Turing.
People will remember this, when AMD & Intel roll out their MI/RT-capable GPUs with their next generation, as leading the vanguard & laying the foundations for all GPU vendors & game companies when it comes to implementing neural-network-accelerated graphics pipelines and raytraced lighting. A mature but light RT implementation should in theory perform better than a pure shader-based one as developers find better ways to balance the workloads concurrently & make use of the whole die simultaneously, without one portion bottlenecking the other (expect this next gen).
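To put a rough number on that balancing argument, here's a toy back-of-the-envelope model (made-up workload figures, nothing measured on real hardware) of why keeping the RT units and the shaders busy at the same time beats running them back to back:

```cpp
// Toy frame-time model with hypothetical numbers, purely illustrative.
// If the RT/Tensor units can work in parallel with the shader cores, the
// frame cost trends towards the larger of the two workloads rather than
// their sum, i.e. neither half of the die sits idle.
#include <algorithm>
#include <cstdio>

int main() {
    const double shading_ms = 10.0; // hypothetical raster/shading workload
    const double rt_ms      = 6.0;  // hypothetical trace + denoise workload

    const double serial_ms     = shading_ms + rt_ms;          // one block idles while the other works
    const double concurrent_ms = std::max(shading_ms, rt_ms); // whole die kept busy

    std::printf("serial:     %.1f ms (~%.0f fps)\n", serial_ms, 1000.0 / serial_ms);
    std::printf("concurrent: %.1f ms (~%.0f fps)\n", concurrent_ms, 1000.0 / concurrent_ms);
    return 0;
}
```

Obviously real engines never overlap perfectly, but that's the direction developers head in as they learn to schedule the two workloads together.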
The request for high-end GPUs that tackle performance issues by just throwing more of the same cores/power at the problem might seem desirable in the short term, but the industry, like every other form of processor ever, can't continue on that path, and at some point someone has to bite the bullet and start the shift towards smarter, more efficient, more accurate pipelines, even if it means a lack of speed-up in most traditional/legacy code (not really an issue for a high-end GPU today, which can generally max out most games) and the fruits of the labour not being realised for months to years. Whichever company stepped out to make that move first would have encountered exactly the same chicken-and-egg problem that Turing is attempting to deal with.
If it weren't for the extensions to x86 that added all sorts of new units & instructions which offered no benefit to legacy code and required recompiles/rewrites to gain anything, the CPU industry would have ground to a halt in the early 90s.
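For a rough idea of what that looks like in practice (a sketch, not any particular codebase; it assumes an AVX-capable CPU and a compiler flag like -mavx): the scalar routine is what every legacy binary keeps running, and the new units do nothing for it until someone rewrites and recompiles against them.

```cpp
#include <immintrin.h> // AVX intrinsics; needs an AVX-capable CPU and -mavx (or /arch:AVX)
#include <cstddef>

// Legacy path: existing binaries keep executing this and gain nothing from the new units.
void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// Rewritten path: processes 8 floats per instruction, but only after a rewrite + recompile.
void add_avx(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        const __m256 va = _mm256_loadu_ps(a + i);
        const __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; ++i) // scalar tail for leftover elements
        out[i] = a[i] + b[i];
}
```

Same story with MMX/SSE back in the day: the silicon shipped first and the software caught up afterwards.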
When the 20XX series of cards was designed, I am pretty sure still selling old 10XX series cards was never going to be part of the plan.
NVidia have made a total mess of introducing these new features: they have tried to do it about two GPU generations too early, hence the huge dies.
My point stands: NVidia are forcing people to buy tech they either don't want or don't need.
You can't introduce features like this in a clean way; introducing a new feature takes resources & sacrifices. They knew this backlash would happen, everyone in the industry did, because people want the best of both worlds and don't see the whole picture. These people would still complain about the lack of innovation if GPU progress slowed to CPU levels (likely what would happen if we kept chasing rasterisation hacks or relying on silicon improvements for performance gains, rather than pursuing the inherently more accurate & natural technique of RT and the various forms of corner-cutting that MI has introduced & will continue to introduce), so there really is no pleasing some. The fact is that you can only develop a concept so far on paper; until someone made true RTRT hardware that was at least viable for development, testing & early gaming, it was never going to happen. This is exactly the same scenario we saw with G80 and the introduction of programmable shaders almost exactly a decade ago.
If NVidia & AMD were set on having 7nm be the true launch for DXR/MI technologies (which seems to be the case given the launch windows of both, the fact that die shrinks are traditionally used for making an architecture wider, and MS saying both were developed with input from both companies), then it would have completely killed the technology if both had committed to raw implementations in silicon without any part of the API stack/game engines/etc. having been tested on real hardware (imagine Turing's launch, but on a far bigger scale). Turing is the blood sacrifice, in the same vein as every other expensive, enthusiast-only-priced luxury item that has charged a premium for being first to market with a forward-thinking feature while really existing to knock down technological barriers & serve as a dev tool without bankrupting the company (Turing is likely the most expensive GPU architecture in history when it comes to R&D).

People ALWAYS complain about these pieces of technology, but in a capitalist society they have to exist to fund progress without risking bankrupting the company taking the risk. The idea that anyone is forced to do anything when we're talking about luxury items 95% of people would consider themselves priced out of is ludicrous; the fact is Turing only targets a tiny, tiny portion of the enthusiast/luxury gaming market given its pricing structure. If you're looking to drop a grand on a GPU then you may feel like you're "forced" to buy a Turing card, but y'know, you could just join everyone else and either wait or pay less. If Turing didn't exist, you'd have been waiting for 7nm till you got new hardware; would that really have made you happier?
Turing isn't meant to be a killer seller; in fact, if Turing had been priced too competitively and sold too well too early, particularly among non-enthusiast circles, it could have killed RTRT's perception off the bat (again, like the actual Turing launch, but so, so much worse). NVidia has priced the mainstream out of Turing, and while yes, that likely won't need to happen with 7nm, that alone isn't justification for waiting for 7nm to launch this technology.
You are totally missing the obvious: Turing is a flawed and broken product.
Ray tracing should never have seen the light of day on Turing, as it makes the chips way too expensive. If NVidia had waited another couple of generations and then introduced RT on a smaller node like 7nm, the cards would have been much cheaper and faster, and the take-up would have been much better; this would have given the tech a far better chance of getting established quickly. If NVidia had done this, there would not have been all the negative comments about the price, or about how slow RT is when enabled in the latest games. It would also have allowed NVidia to include RTX all the way down the product line into the mid-range cards, ensuring better market penetration.
NVidia have screwed up really badly with Turing, and for once they find themselves out of touch with the buying public.
Don't forget I am speaking as someone who actually owns some Turing cards, so I am not biased against NVidia.
Here are my 4 Turing cards.
Nope, that's just not how hardware rollouts work.
No they don't!!!
Manufacturers want to appeal to as large a customer base as possible from day 1; failure to do this can lead to the technology fading into history very quickly.
Also, as a game dev, why should they waste resources on a tech that only exists on a few cards and that the rest of the gaming public laughs at for low fps?
If this tech had been introduced at a future date, maybe in 5 years' time, it could have launched on everything from the mid-range cards upwards at prices that were more reasonable and good for building a large customer base.
If AMD had some high-end cards at 1080 Ti performance level or better on the market, NVidia would never, never, never have dared launch Turing with RT, because 99% of people would have voted with their wallets.
Many individual devs are working on DXR support; their games are just mostly coming out over the coming months. Really, all it takes for eventual widespread support in today's world is for the game engine developers to support it, and the fact that it will be a standard feature in the widely used Unreal Engine and Unity engines (as well as Frostbite, which is now used for everything from sports games to racers beyond its BF origins), plus tools like SolidWorks & Autodesk's suites amongst many more, more or less solidifies its position as a widespread technology already. NVidia has certainly jumped the gun a little by going as wide as they have with this release from a purely economic standpoint, but only by about 10 months at most given 7nm timescales, and all that extra support & testing over an exclusively prosumer release could pay dividends when it comes to getting their 7nm hardware out.
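For what it's worth, the engine-side cost of at least detecting DXR is tiny, which is part of why engine-level support is the realistic path to widespread adoption. A minimal sketch (assuming D3D12 headers from a Windows 10 SDK that includes DXR, i.e. 1809 or later; SupportsDXR is just an illustrative helper name) of the capability check an engine would do before enabling its RT path and otherwise falling back to the usual shader-based lighting:

```cpp
#include <windows.h>
#include <d3d12.h>

// Ask the D3D12 device whether the runtime/driver expose any DXR tier at all.
// An engine gates its raytraced passes on this kind of check and falls back
// to rasterised lighting when it fails.
bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5)))) {
        return false; // older runtime or driver: no raytracing support reported
    }
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```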