Nvidia releases RTX filled Control trailer with ray tracing effects

Damn, this game looks sick... Personally, I find it hard to find games that appeal to me, but this one certainly did. And this is actually the first time I think RTX really shows its potential. All of the previous Battlefield V footage and examples just didn't cut it... This one, on the other hand, did. Just damn!
 
I really hope Nvidia's 3000 series pricing comes back down to a sane level. If Nvidia want RTX to take off, they need to go back to 10 series pricing, which was already high.
 
The very top-end pricing likely isn't going to change much; if a company can justifiably sell a single card for £1000, they will, but as with Turing that would likely be a server-grade chip in a class above the top end of prior generations. What they need to change is the value for money in the low, mid and normal high end, which at the moment they've been incapable of doing because of the huge, oversized dies they're having to use in those segments thanks to all the forward-looking features they crammed in, leaving them very vulnerable to AMD, who are now taking full advantage of Nvidia's tiny profit margins there. Realistically, if Nvidia's Turing prices don't budge much further on these parts, it's because they've run out of margin to sacrifice; their Super pricing clearly isn't where they originally wanted it to be against Navi.

That said, there will probably be a big jump in RT unit efficiency and allocation with the 3000 series too, which would bring the kind of generational shift where roughly RTX 2080 RT performance arrives in the RTX 3060 segment, given how we expect raytracing to evolve at the moment.
 
TBH, Nvidia needs to do more to justify having Tensor cores on consumer chips. As it stands, DLSS isn't cutting the mustard, and in most cases that silicon is mainly sitting there as wasted die space.

7nm will do good things for Nvidia next generation, especially for whatever next-gen card carries the RTX 2080 Ti's CUDA core count. That card at stock has lower clocks than a lot of the other RTX cards, and a big reason for that is heat and power: the RTX 2080 Super has a 300MHz higher base clock and an almost 200MHz higher listed boost clock.

It will be very interesting to see what Nvidia has cooking for the next generation. Turing was already a huge leap over Pascal, but the problem with Turing's image is that those changes are forward-looking, so they don't showcase the cards well in legacy titles.
 
Nvidia are still investing a lot in creating a neural network model that can accurately de-noise raytraced images in real time without obvious artefacting, so I don't think Tensor cores are going anywhere yet. They talked about this application before launch and it's been in OptiX since Volta, but it seems like first-gen Turing didn't really have the grunt to do it properly in real time. I'd assume a big part of that is the loss of accuracy in the scene being quite stark against traditionally rendered parts, so heavier use of RT in world scenes would also help nudge the problem away.
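For anyone wondering what the de-noising problem actually looks like: a raytraced frame at low sample counts is basically a noisy estimate per pixel, and the naive fix is averaging neighbours, which smears edges and detail along with the noise. That gap is what the learned de-noiser is supposed to close. Toy C++ sketch of the naive approach, purely my own illustration and nothing to do with Nvidia's actual Tensor-core network:

[CODE]
#include <vector>
#include <algorithm>

// Toy spatial de-noiser: averages each pixel with its neighbours.
// Real learned de-noisers also feed in albedo/normal/depth buffers and use a
// trained network so they can keep edges while removing noise; this naive
// box filter blurs fine detail along with the noise, which is the problem.
struct Image {
    int width = 0, height = 0;
    std::vector<float> rgb;  // width * height * 3, linear colour
};

Image boxDenoise(const Image& in, int radius) {
    Image out = in;
    for (int y = 0; y < in.height; ++y) {
        for (int x = 0; x < in.width; ++x) {
            float sum[3] = {0.f, 0.f, 0.f};
            int count = 0;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = std::clamp(x + dx, 0, in.width - 1);
                    int ny = std::clamp(y + dy, 0, in.height - 1);
                    const float* p = &in.rgb[(ny * in.width + nx) * 3];
                    sum[0] += p[0]; sum[1] += p[1]; sum[2] += p[2];
                    ++count;
                }
            }
            float* q = &out.rgb[(y * in.width + x) * 3];
            q[0] = sum[0] / count; q[1] = sum[1] / count; q[2] = sum[2] / count;
        }
    }
    return out;
}
[/CODE]

The interesting part is doing something much smarter than this on a full frame at 60+ fps, which is where the Tensor cores are supposed to earn their keep.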
 
Battlefield V, Tomb Raider, and Metro were just last-minute demos bolted onto pretty much finished games. Now we'll see proper stuff.

I just hope Nvidia will implement NVLink properly with the next-gen cards. It was designed to eliminate SLI's limitations (having the same data duplicated in each card's memory). Imagine the horsepower of two 2080 Ti (3080 Ti) cards having access to each other's resources and working as a single unit. Oh... I would buy two of them if games looked like this or better.
 
NVLink is on the top-end Turing cards, but even the full variant has nowhere near the bandwidth required for that use case when it comes to gaming, and the cut-down link on the GeForce cards especially doesn't. Even then, it would require developers to program explicitly for each combination of cards in this setup via custom DX12 or Vulkan paths, and it would have terrible scaling going by Nvidia's own testing and research in this area. You'd literally need a crazy expensive optical link for this kind of thing to start to make sense with discrete high-end cards in gaming (of course it would be much easier if we could put the chips right next to each other and maybe share a memory controller...).
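For anyone curious what "programming explicitly for each card" means in practice: with DX12's explicit multi-adapter model the engine has to enumerate every GPU itself, create a device per adapter, and then manually decide what work and what copies go where, none of it comes for free from the driver like old implicit SLI did. Rough C++/DXGI sketch of just the enumeration step (my own illustration, error handling and the actual workload splitting omitted):

[CODE]
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every hardware adapter and create a D3D12 device on each.
// With explicit multi-adapter, everything beyond this point - splitting the
// frame, copying results between GPUs over PCIe/NVLink, synchronising fences -
// is the application's job, which is exactly why so few engines bother.
std::vector<ComPtr<ID3D12Device>> createDevicesOnAllAdapters() {
    ComPtr<IDXGIFactory6> factory;
    CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip the WARP/software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            devices.push_back(device);
        }
    }
    return devices;
}
[/CODE]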
 
Yeah... I maybe overreached with my wishes. But it can certainly be utilised to a much higher level than the older links. DX12 and Vulkan were designed for exactly that purpose: multi-core and multi-GPU support. DX12 was released 4 years ago; you can develop a full engine from scratch in that time. Look at the id Tech 6 engine, it outperforms everything by a large margin. It should be new engines or nothing. Even though the technology exists, games are still constrained by a 10-year-old platform.

AMD pushed developers to utilise more cores. I really hope Microsoft and Nvidia push them away from the 10-year-old DX11 platform and onto DX12 and DXR only.
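On the "designed for multi-core" point: the main thing DX12 and Vulkan actually change there is that command recording can be spread across worker threads into separate command lists and then submitted together, instead of funnelling everything through one driver thread like DX11. Rough C++ sketch of that pattern (assumes a device and queue already exist; PSOs, barriers and frame synchronisation omitted, so it's just the shape of it):

[CODE]
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Record one command list per worker thread, then submit them all at once.
// This is the part DX11 can't really do: there the driver serialises command
// generation, while in DX12 the application owns it and can scale it across
// however many CPU cores it has.
void recordAndSubmitParallel(ID3D12Device* device,
                             ID3D12CommandQueue* queue,
                             unsigned workerCount) {
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&, i] {
            // Each thread would record its own slice of the frame here
            // (draw calls, resource barriers, etc.) with no shared locks.
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
[/CODE]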
 
New engine or nothing?! HAHA, yeah right. Good luck trying to convince companies to invest tens of millions of dollars to do that, not to mention at least 3 years.

DX11 is fine for now. Its support, tools, and libraries are backed by 10 years of development. Many games still release on it and offer great visuals and performance. There's not much reason to fix what's not broken.

DX12 and Vulkan still need a lot of work just to get running, without even mentioning that the amount of support and tooling is considerably less than for DX11.

Stability over the latest and greatest technology. That's what every developer follows for large-scale projects.
 
That is what I find strange. Microsoft wants everyone on Win10. Companies have stayed with DX11 because more than half of PCs still run Win7; that's a safe move, no argument there. But instead of going DX12-or-nothing to force everyone onto Win10 (which would be a typical Microsoft move), they back-ported DX12 to run on Win7.
 
Nah, Microsoft said from the start that DX12 wasn't meant to replace DX11; both are meant to coexist and will do for a long time. DX12 simply isn't ready yet for most of the situations where DX11 still does great. MS have done well in developing libraries for DX12 to help bring it up to par with DX11 for an average developer, but it's still a long way off. DX12 is great for huge, multi-million-pound development teams, but even MS say that average developers shouldn't really consider it unless they need to. Developing a game engine to properly make use of DX12 would be orders of magnitude more expensive and time-consuming than just developing a game on an existing engine, to the point where most titles simply wouldn't be viable at all. Even then, it's only with Turing that the largest manufacturer in PC gaming has had an architecture with enough grunt to make good use of DX12; most people still use GTX 960s or similar, and Maxwell had quite a few known issues that made DX12 a bit of a waste of time for most of the PC market.

DX12 logically should be the last large iteration of Direct3D, at least for a very long time, though. It's a very open baseline to work from going forward.
 