I doubt Turing will be used for the smaller designs. While they could get away with using binned versions at the top end of their gaming lineup, this massive die size clearly won't be economical for most gaming cards, and per square millimetre of die that layout doesn't prioritise performance in any game currently released or expected this year. I assume RTX technology wouldn't make sense on many lower-end gaming cards this generation from an economic perspective. In theory they could release a "GTX" die around half the size while maintaining the same performance in almost every game currently out.
Hmm, maybe. Regardless, Nvidia will want to push Ray Tracing as hard as possible. If they can get a decent number of games to use it, AMD will be left in a terrible position. Nvidia has every incentive to get as many RTX-compatible cards out there as they can.
RTX can be used with Microsoft's DXR (DirectX Ray Tracing) API, so they can even say that they are using an industry standard to get around the usual "proprietary standard" complaints. RTX isn't another PhysX: AMD could build their own ray-tracing accelerator if they wanted to (who knows how long that would take, though).
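To be clear about why that dodges the "proprietary" complaint: a game asks the D3D12 runtime whether the installed driver exposes raytracing at all, not whether an Nvidia card is present, so any vendor's driver can answer "yes". Rough sketch of that capability check below (untested, assumes a recent Windows 10 SDK; the structure is just illustrative, not any particular engine's code):

```cpp
// Minimal sketch: query the vendor-neutral D3D12 feature support for DXR.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter (whichever vendor's GPU that is).
    ComPtr<ID3D12Device5> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No suitable D3D12 device available.\n");
        return 1;
    }

    // Ask the runtime whether the driver exposes raytracing, and at what tier.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))))
    {
        if (options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
            std::printf("DXR supported (tier %d).\n", (int)options5.RaytracingTier);
        else
            std::printf("DXR not supported by this driver/GPU.\n");
    }
    return 0;
}
```

Nothing in that path is Nvidia-specific; whether the tier comes back as supported is entirely up to the driver, which is why AMD could in principle ship their own implementation.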
Nvidia's gross margin is currently over 64.5%, which is insane for a company with no fabs. For perspective, Intel's gross margin is 61.2% (and they own their fabs) and AMD's is 37%. Nvidia can afford to waste a little die space.
I agree that the tech will be wasteful on low-end cards, perhaps 2050/Ti grade or lower, but I expect GTX/RTX (or whatever) 2060-grade cards and above to have both Tensor and RTX cores. Nvidia needs to be able to say they offer more than just compute performance, and if devs actually use those features, they win.
AMD hasn't revealed their alternative to RTX, if they even have one, which makes every game that uses it an automatic win for Nvidia and a reason for people to upgrade. We already know that the next Metro will feature Ray Tracing elements.