Nvidia is reportedly working on a GTX 1660 Ti/GTX 1160

NVidia isn't forcing anyone to do anything. If you make the choice to buy a high-end GPU but don't want to pay the premium for RT, buy a 1000-series card; they're basically what you're asking for, given that outside of the addition of Tensor & RT units the shaders themselves haven't really changed meaningfully between Pascal/Volta/Turing.

People will remember this generation, when AMD & Intel roll out their MI/RT-compatible GPUs with their next architectures, as the one that led the vanguard & laid the foundations for all GPU vendors & game companies when it comes to implementing neural-network-accelerated graphics pipelines and ray-traced lighting. A mature but light RT implementation should in theory perform better than a pure shader-based one as developers find better ways to balance the workloads concurrently & make use of the whole die simultaneously, without one portion bottlenecking the other (expect this next gen).
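To make the concurrency point concrete, here's a toy C++ sketch of the scheduling argument: if the RT/tensor portion of the die and the traditional shader portion run back to back, their costs add, but if the workloads are balanced and overlapped, the frame is only as slow as the longer of the two. All the timings below are invented placeholders, not measurements from any real card.

```cpp
// Toy model of a hybrid frame: rasterisation/shading work and RT work can
// either run back to back or be overlapped across the die. All numbers are
// invented placeholders purely to illustrate the scheduling argument.
#include <algorithm>
#include <cstdio>

int main() {
    const double shader_ms = 9.0; // hypothetical time in traditional shader work
    const double rt_ms     = 6.0; // hypothetical time in RT/denoise work

    // Naive approach: one block of units sits idle while the other works.
    double serial_frame = shader_ms + rt_ms;

    // Mature approach: both parts of the die are kept busy at the same time,
    // so the frame is limited by whichever workload is the longer one.
    double overlapped_frame = std::max(shader_ms, rt_ms);

    std::printf("serial:     %.1f ms (%.0f fps)\n", serial_frame, 1000.0 / serial_frame);
    std::printf("overlapped: %.1f ms (%.0f fps)\n", overlapped_frame, 1000.0 / overlapped_frame);
    // The closer the two workloads are balanced, the less one portion of the
    // die bottlenecks the other -- which is the tuning work that takes time.
    return 0;
}
```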

The request for high-end GPUs that tackle performance by just throwing more of the same cores/power at the problem might seem desirable in the short term, but the industry, like every other kind of processor ever, can't continue on that path, and at some point someone had to bite the bullet and start the shift towards smarter, more efficient, more accurate pipelines (even if it means little speed-up in most traditional/legacy code (though that's not really an issue for a high-end GPU today, which can generally max out most games), with the fruits of the labour not realised for months or years). Whichever company stepped out to make that move first would have hit exactly the same chicken-and-egg problem that Turing is attempting to deal with.

If it weren't for the extensions to x86 that added all sorts of new units & instructions which offered no benefit to legacy code and required recompiles/rewrites to gain anything, the CPU industry would have ground to a halt in the early 90s.
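As a simplified illustration of that kind of extension (using AVX here purely as a later stand-in example, not one of the early-90s additions the post refers to): the vector units sit unused by any binary built before they existed, and only code that's rewritten, or at least recompiled with auto-vectorisation, ever touches them.

```cpp
// Needs an AVX-capable CPU and e.g. `g++ -mavx example.cpp` -- the point being
// that an old binary compiled before AVX existed never touches these units.
#include <immintrin.h>
#include <cstdio>

// Legacy-style scalar loop: runs anywhere, ignores the vector units.
void add_scalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

// Rewritten to use 256-bit AVX instructions: 8 float adds per instruction,
// but only after the code is explicitly changed and recompiled.
void add_avx(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; ++i) out[i] = a[i] + b[i]; // scalar tail
}

int main() {
    float a[16], b[16], c[16];
    for (int i = 0; i < 16; ++i) { a[i] = float(i); b[i] = 2.0f * i; }
    add_scalar(a, b, c, 16);
    add_avx(a, b, c, 16);
    std::printf("c[15] = %.1f\n", c[15]); // 15 + 30 = 45
    return 0;
}
```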


When the 20XX series of cards was designed, I am pretty sure selling old 10XX-series cards was never part of the plan.

NVidia have made a total mess of introducing these new features, as they have tried to do it about two GPU generations too early, hence the huge dies.

My point stands

NVidia are forcing people to buy tech they either don't want or don't need.
 

Got to agree with your outlook on that one.
 
You can't introduce features like this in a clean way; introducing a new feature takes resources & sacrifices. They knew this backlash would happen, everyone in the industry did, because people want the best of both worlds and don't see the whole picture. These people would still complain about the lack of innovation if GPU progress slowed to CPU levels of progress (likely what would happen if we kept chasing rasterisation hacks or relying on silicon improvements alone, rather than pursuing the inherently more accurate & natural technique of RT & the various forms of corner-cutting MI has introduced & will continue to introduce), so there really is no pleasing some. The fact is that you can only develop a concept so far on paper; until someone made true RTRT hardware that was at least viable for developing, testing & early gaming, it was never going to happen. This is exactly the same scenario we saw with G80 & the introduction of programmable shaders just over a decade ago.

If NVidia & AMD were set on having 7nm's launch be the true launch for DXR/MI technologies (which seems to be the case from the launch windows of either, the fact die shrinks are traditionally used for making an architecture wider, and MS saying the technologies were developed with input from both companies), then it would have completely killed the technology if both had committed to raw implementations in silicon without any part of the API stack/game engines/etc. being tested on it (imagine Turing's launch, but on a far bigger scale). Turing is the blood sacrifice, in the same vein as every other expensive, enthusiast-only-priced luxury item that has charged a premium for being first to market with a forward-thinking feature while really existing to knock down technological barriers & act as a dev tool without bankrupting the company (Turing is likely the most expensive GPU architecture in history when it comes to R&D). People ALWAYS complain about these pieces of technology, but in a capitalist society they have to exist to fund progress without risk of bankrupting the company taking that risk. The idea that anyone is forced to do anything when we're talking about luxury items 95% of people would consider themselves priced out of is ludicrous; the fact is Turing only targets a tiny, tiny portion of the enthusiast/luxury gaming market given its pricing structure. If you're looking to drop a grand on a GPU then you may feel like you're "forced" to buy a Turing card, but you know, you could just join everyone else and either wait or pay less. If Turing didn't exist, you'd have been waiting for 7nm till you got new hardware; would that really have made you happier?

Turing isn't meant to be a big seller; in fact, if Turing had been priced too competitively and sold too well too early, particularly among non-enthusiast circles, it could have killed RTRT's perception off the bat (again, like the actual Turing launch, but so, so much worse). NVidia has priced the mainstream out of Turing, and while yes, this likely won't need to happen with 7nm, that alone isn't justification for waiting for 7nm to launch this technology.
 

So you believe that Nvidia NEED to charge what they are charging for Turing in order to turn a reliable, consistent, and fair profit? I agree with your comment, but I find it hard to let go of the feeling that Turing is priced the way it is for more reasons than just the one (that it costs that much to make).
 
Turing's dies are much bigger than Pascal's at a given branding tier. The top-end 80 Ti die has a 60% increase in transistor count/area, the 80 tier has a 75% increase, and the 2070's die, three tiers down (the 80 Ti/80/70 all launched with chips unique to each other), nearly matches the size of the 1080's.
The RTX 2070's launch price was $499 and the GTX 1080's was $549 (I don't feel there's a point comparing UK launch prices, because the £ & cost of living in general have seen double-digit percentage changes in that time), so yes, I feel the pricing is roughly in line with the likely cost of manufacture for at least part of the stack (remembering that cost generally rises much faster than linearly with die area, that premium products with greater risk & input capital always take larger absolute cuts even at the same percentage margin, and that the R&D budget also rises with transistor count and whenever new technology & supporting software has to be rolled out).
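A rough back-of-the-envelope of that area-to-cost relationship: bigger dies both fit fewer candidates on a wafer and yield worse at a fixed defect density. The wafer cost and defect density below are illustrative guesses, and the die areas are the approximate published figures for GP104 (GTX 1080) and TU104 (RTX 2080); only the shape of the result is the point.

```cpp
// Back-of-the-envelope wafer cost model. Defect density and wafer cost are
// illustrative guesses only -- the shape of the result is what matters.
#include <cmath>
#include <cstdio>

int main() {
    const double wafer_cost_usd  = 6000.0;  // assumed cost of one 300 mm wafer
    const double wafer_area_mm2  = 70000.0; // ~usable area of a 300 mm wafer
    const double defects_per_mm2 = 0.001;   // assumed defect density

    const double die_areas[] = { 314.0, 545.0 }; // ~GP104 (1080) vs ~TU104 (2080), mm^2
    for (double area : die_areas) {
        double dies_per_wafer = wafer_area_mm2 / area;            // ignores edge loss
        double yield          = std::exp(-defects_per_mm2 * area); // simple Poisson model
        double cost_per_good  = wafer_cost_usd / (dies_per_wafer * yield);
        std::printf("%.0f mm^2: ~%.0f dies/wafer, yield %.0f%%, ~$%.0f per good die\n",
                    area, dies_per_wafer, yield * 100.0, cost_per_good);
    }
    return 0;
}
```

With these made-up inputs the 74% larger die works out at roughly double the cost per good die, which is the "faster than linear" effect referred to above.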

I've said it before: I think if the GPU arena were more competitive at the moment we probably wouldn't have seen a consumer launch of Turing at all, and probably no TU104/106 dies. Given the risk & cost of such a part (remembering that the cost of those risks often comes early, from investors jumping off the stock, something NVidia has had issues with post-Turing, though also because of their massive build-up of 1060 inventory), it would have been kept in the developer/prosumer sphere like Volta was. I know some may have preferred that, but this route has meant much wider testing for the technologies of the single-digit-nm era & has given much more time for the software stack to mature prior to widespread distribution. I definitely don't think there is or will be any short-term monetary benefit for NVidia from the launch of consumer Turing (given their already dominant market position, lack of need to take risks, and existing inventory of certain Pascal parts still in critical excess); it's part of a much longer play at making this transition less painful, presumably with the hope it will minimise the long-term costs. Now they can take things more or less one game at a time with a slow rollout, the stack can mature stably, and basic principles of efficient & sensible implementations can start to be established. Even BFV saw large double-digit jumps in RT performance within weeks thanks to widespread testing & user feedback, & you can bet there won't be an RTX 2060 until the stack is optimised well enough to support at least a few bits of fancy RT being turned on without much impact on performance; at launch there was no way of even knowing which features caused the larger performance hits across every possible test setup & game scenario.
 
You are totally missing the obvious -

Turing is a flawed and broken product.

Ray tracing should never have seen the light of day on Turing, as it makes the chips way too expensive. If NVidia had waited another couple of generations and then introduced RT on a smaller node like 7nm, the cards would have been much cheaper and faster and take-up would have been much better, which would have given the tech a much better chance of getting established quickly. If NVidia had done this there would not have been all the negative comments about the price, or about how slow RT is when turned on in the latest games. It would have also allowed NVidia to include RTX all the way down the product line into the mid-range cards, ensuring better market penetration.

NVidia have screwed up really badly with Turing, and for once they find themselves out of touch with the buying public.

Don't forget I am speaking as someone who actually owns some Turing cards, so I am not biased against NVidia.

Here are my 4 Turing cards. :)

[Photos of the cards]
 

Nope, that's just not how hardware rollouts work. If the first time we had RTRT hardware on the market was when 7nm launched, we'd still be in exactly the same situation software-wise as we are now (one game barely supporting it & no principles or technological precedent for using the technology efficiently enough to allow it on weaker hardware). Because we're going through that now, with an enthusiast-only set of cards initially, we will already have an established stack & an approach to proper & efficient implementations of RTRT & MI hardware *before* it's a technology the masses really have access to or want. This was the only real way to avoid a buggy mess of a full-scale rollout for this technology; think of it as a practice run if you want, but it's a hell of a lot more than that when it comes to establishing the DXR stack. It will likely end up being roughly 5 months between the launch of Turing and a roughly mainstream-priced card hitting the market with the RTX 2060, so it's not like this will remain an enthusiast-only launch even for Turing's lifetime, but you never launch new technology that doesn't yet have established principles straight to the mainstream mass-market level; that's a disaster waiting to happen.

The allocation of RT hardware vs traditional shader units in the next generations (maybe not the first 7nm GPUs, but certainly the ones after that) will almost certainly be influenced by how games (& other relevant software) use them, but no one knows what tricks & ideas game/engine/API developers will find useful, or what loads they create on real-world hardware, until they're actually available to test en masse.
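As a purely hypothetical sketch of that feedback loop: once profiling data exists for how much shader work versus RT/MI work a representative frame generates, the die split that keeps both sets of units busy at once falls out of equalising their completion times. Every number below is invented; it's the calculation, not the values, that's the point.

```cpp
// Hypothetical rebalancing exercise: given measured per-frame shader and RT/MI
// workloads, find the die split where neither side waits on the other.
// All workload and throughput numbers are invented for illustration.
#include <cstdio>

int main() {
    // Work per frame, in arbitrary "units of work".
    const double shader_work = 900.0;
    const double rt_work     = 300.0;

    // Throughput per mm^2 of die devoted to each kind of unit (arbitrary units).
    const double shader_tp_per_mm2 = 1.0;
    const double rt_tp_per_mm2     = 0.5;

    const double budget_mm2 = 500.0; // area available for compute units

    // Balanced when shader_work / (s * tp_s) == rt_work / ((B - s) * tp_rt).
    // Solving for the shader area s:
    double s = budget_mm2 /
               (1.0 + (rt_work * shader_tp_per_mm2) /
                      (shader_work * rt_tp_per_mm2));
    double r = budget_mm2 - s;

    std::printf("shader units: %.0f mm^2, RT/MI units: %.0f mm^2\n", s, r);
    std::printf("frame time (arb.): shader %.2f, RT %.2f\n",
                shader_work / (s * shader_tp_per_mm2),
                rt_work / (r * rt_tp_per_mm2));
    return 0;
}
```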

I know some people believe that rollouts like this turn them into guinea pigs, but the fact is, when you've got technology that relies on the work of tens of companies & research institutions & hundreds of people over thousands of hours, it's never going to come together perfectly the first time all those groups can actually test their work with each other's, and we increasingly need limited wide-scale rollouts of progressive technology prior to mainstream rollouts as the scale, complexity, and level of collaboration required skyrockets.
 

Turing is flawed. That's what tgrech is saying. By its very nature it won't appeal to everyone; that is its flaw. I said this months ago, but Turing doesn't seem like a consumer product. The consumers who do adopt it are more like beta testers, the elite few who are both capable of the investment and eager enough, maybe people like you. The 'real' cards will come later.

I think Nvidia in many ways played it smart. They've suffered a few blows, but people are still going to buy their cards in droves because they are still the leading manufacturer. I agree with tgrech when he says that if AMD were more competitive and had cards at the high end that didn't suck, Nvidia might not have introduced Turing in its current form at this time. It might have chosen to segment things and offer RTRT-less versions, or maybe just one card with ray tracing, like a Titan. But Nvidia saw an opening in the market. They were so far ahead that they could afford to be first to the next generation of 3D rendering, and could charge what they're charging without alienating investors, partners, and consumers.
 
Nope, that's just not how hardware rollouts work.

No, they don't!!!

Manufacturers want to appeal to as large a customer base as possible from day one; failure to do that can lead to the technology fading into history very quickly.

Also, as a game dev, why would you waste resources on a tech that only exists on a few cards and that the rest of the gaming public laughs at for low fps?

If this tech had been introduced at a future date, maybe in 5 years' time, it could have launched on everything from the mid-range cards upwards at prices that were more reasonable and better for building a large customer base.

If AMD had some high-end cards at 1080 Ti performance level or better on the market, NVidia would never, never, never have dared launch Turing with RT, because 99% of people would have voted with their wallets.
 

But there are plenty of examples of that not being true. Electric cars, for instance: when they came to market, adoption was low, which was expected, but years later they're still growing exponentially with little sign of slowing down. People see why electric cars are the future and why they're so important, so even if they can't afford one now or can't justify one at this time, they're glad they are in production and being engineered. It's similar here. People see that ray tracing is the next evolution of rendering; even if they can't be the beta tester, they're glad that it's being introduced. I know that's not all 'people' and many of us feel bitter, but I think that's the real nitty-gritty of it. It's just frustrating as consumers.
 
Ray tracing is the future and devs will adopt it eventually. Many studios are probably in the R&D phase at this moment, and the tech is constantly evolving. It'll just take a while. But as with everything graphics-related, getting closer to life-like visuals is the goal.
 
No, they don't!!!

Manufacturers want to appeal to as large a customer base as possible from day one; failure to do that can lead to the technology fading into history very quickly.


They already appeal to a large customer base: they still sell Pascal cards from the 1030 to the 1060 range that compete reasonably, with some SKUs technically fairly recent. Most high-end features that have an associated cost start in premium products regardless of the industry, before trickling down as they become more refined & therefore cheaper. There's no reason to suggest that allocating resources to RTRT would have been seen as any more appealing on mid- to low-end cards in 5 years' time than today, given the software support & level of testing would be just as minimal, and it would still be using up die space that could have other uses.
Also, as a game dev, why would you waste resources on a tech that only exists on a few cards and that the rest of the gaming public laughs at for low fps?
Many individual devs are working on DXR support; it's just that their games are mostly coming out over the coming months. Really, all it takes for eventual widespread support in today's world is for the game engine developers to support it, and the fact that it will be a standard feature in the widely used Unreal Engine and Unity engines (as well as Frostbite, which is now used for everything from sports games to racers beyond its BF origins), plus tools like SolidWorks & Autodesk's packages among many more, more or less solidifies its position as a widespread technology already. NVidia has certainly jumped the gun a little by going as wide as they have with this release from a purely economic standpoint, but only by about 10 months at most given 7nm timescales, and all that extra support & testing over an exclusively prosumer release could pay dividends when it comes to getting their 7nm hardware out.
 