  #1  
19-07-19, 09:43 AM
WYP
News Guru
 
Join Date: Dec 2010
Location: Northern Ireland
Posts: 15,848
Nvidia releases RTX filled Control trailer with ray tracing effects

Will Control be Nvidia's killer app for RTX ray tracing?



Read more about Nvidia's RTX trailer for Control.

__________________
Twitter - @WYP_PC
  #2  
19-07-19, 09:53 AM
Dawelio
OC3D Elite
 
Join Date: Aug 2014
Location: Scandinavia
Posts: 4,762
Damn, this game looks sick... I personally find it hard to find games that appeal to me, but this one certainly did. And this is actually the first time I think RTX really shows its potential. All of the previous Battlefield V footage and examples just didn't really cut it... This one, on the other hand, did. Just damn!
  #3  
19-07-19, 11:04 AM
Dicehunter
Resident Newb
 
Join Date: Oct 2012
Location: Newbsville
Posts: 13,410
I really hope Nvidia's 3000-series pricing comes back down to a sane level. If Nvidia wants RTX to take off, they need to go back to 10-series pricing, which was already high.
__________________
Steam: Dicehunter
Origin: Dicehunter
Uplay: Dicehunter
Battle.Net: Dicehunter#2746
  #4  
19-07-19, 11:15 AM
tgrech
OC3D Elite
 
Join Date: Jun 2013
Location: UK
Posts: 1,397
The very top-end pricing likely isn't going to change much; if a company can justifiably sell a single card for £1000, it will, and as with Turing that card would likely be a server-grade chip in a class above the top end of prior generations. What they need to change is the value for money in the low, mid, and normal high-end segments, which at the moment they've been incapable of doing because of the huge, oversized dies they're having to use there thanks to all the forward-looking features they crammed in. That leaves them very vulnerable to AMD, who are now taking full advantage of Nvidia's tiny profit margins in these segments (some rough wafer maths below). Realistically, if Nvidia's Turing prices don't budge much further on these parts moving forward, it's because they've run out of margin to sacrifice; their Super pricing clearly isn't where they wanted it to be against Navi originally.
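To put hedged numbers on the die-size point: below is a back-of-envelope sketch using the published die areas (~545 mm² for TU104, ~251 mm² for Navi 10) and the standard gross-dies-per-wafer approximation. It ignores yield, which would only widen the gap for the bigger die.

Code:
// Back-of-envelope dies-per-wafer comparison. Hedged assumptions: 300 mm
// wafers, published die areas, standard gross-die approximation, no yield.
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

// Gross dies per wafer: wafer area over die area, minus a correction term
// for the partial dies lost along the wafer edge.
int grossDiesPerWafer(double waferDiameterMm, double dieAreaMm2) {
    double radius = waferDiameterMm / 2.0;
    return static_cast<int>(kPi * radius * radius / dieAreaMm2
                            - kPi * waferDiameterMm / std::sqrt(2.0 * dieAreaMm2));
}

int main() {
    const double wafer  = 300.0;  // mm, standard wafer diameter
    const double tu104  = 545.0;  // mm^2, RTX 2080/2080 Super class die
    const double navi10 = 251.0;  // mm^2, RX 5700 XT die
    std::printf("TU104  (~545 mm^2): ~%d candidate dies per wafer\n",
                grossDiesPerWafer(wafer, tu104));
    std::printf("Navi10 (~251 mm^2): ~%d candidate dies per wafer\n",
                grossDiesPerWafer(wafer, navi10));
    return 0;
}

That works out to roughly 100 TU104 candidates against roughly 240 Navi 10 candidates per wafer, before yield, which is the margin squeeze being described.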

Though there will probably be a big jump forward in RT-unit efficiency and allocation with the 3000 series too, which will almost certainly bring the kind of generational jump where, say, ~RTX 2080 RT performance arrives in the ~RTX 3060 segment, given how we expect raytracing to evolve at the moment.
  #5  
19-07-19, 11:41 AM
WYP
News Guru
 
Join Date: Dec 2010
Location: Northern Ireland
Posts: 15,848
TBH, Nvidia needs to do more to justify having Tensor cores on consumer chips. As it stands, DLSS isn't cutting the mustard, and in most cases the Tensor hardware is mainly sitting there as wasted die space.

7nm will do good things for Nvidia with the next generation, especially for whatever card carries the RTX 2080 Ti's CUDA core count next time around. At stock, the 2080 Ti has lower clocks than a lot of the other RTX cards, and a big reason for that is heat and power: the RTX 2080 Super has a 300MHz higher base clock and an almost 200MHz higher listed boost clock.

It will be very interesting to see what Nvidia has cooking for the next generation. Turing was already a huge leap over Pascal, but the problem with Turing's image is that these changes are forward-looking, which doesn't showcase the cards well in legacy titles.
__________________
Twitter - @WYP_PC
  #6  
19-07-19, 11:53 AM
tgrech
OC3D Elite
 
Join Date: Jun 2013
Location: UK
Posts: 1,397
Nvidia are still investing a lot in creating a neural-network model that can accurately denoise raytraced images in realtime without obvious artefacting, so I don't think Tensor cores are going anywhere yet. They did talk about this application before launch, and it's been in OptiX since Volta, but it seems like first-gen Turing didn't really have the grunt to do it properly in realtime. I'd assume a big part of the problem is that the loss of accuracy in the denoised parts of the scene looks quite stark against traditionally rendered parts, so heavier use of RT across world scenes would also help nudge the problem away.
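For anyone wondering what the denoising step actually does: a raytraced frame at low sample counts is speckled, and the denoiser reconstructs a clean image from the noisy one. Nvidia's approach is a trained network running on the Tensor cores (the OptiX AI denoiser); the box filter below is only a toy stand-in for the concept, not their implementation.

Code:
// Toy spatial denoiser: averages each pixel with its neighbours. A real-time
// raytracing denoiser is edge-aware, uses albedo/normal buffers, and in
// Nvidia's case is a trained network on Tensor cores; this is only the
// minimal version of the idea.
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Denoise a w x h single-channel image with a (2r+1)^2 box filter,
// clamping sample coordinates at the image borders.
std::vector<float> boxDenoise(const std::vector<float>& img, int w, int h, int r) {
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int n = 0;
            for (int dy = -r; dy <= r; ++dy) {
                for (int dx = -r; dx <= r; ++dx) {
                    int nx = std::clamp(x + dx, 0, w - 1);
                    int ny = std::clamp(y + dy, 0, h - 1);
                    sum += img[ny * w + nx];
                    ++n;
                }
            }
            out[y * w + x] = sum / n;
        }
    }
    return out;
}

int main() {
    // A "1-sample-per-pixel" stand-in: the true value is 0.5 everywhere,
    // but each pixel carries random noise, like an undersampled raytrace.
    const int w = 64, h = 64;
    std::vector<float> noisy(w * h);
    for (float& p : noisy) p = 0.5f + (std::rand() % 1000 - 500) / 2000.0f;

    std::vector<float> clean = boxDenoise(noisy, w, h, 2);
    std::printf("centre pixel before: %.3f  after: %.3f (true value 0.5)\n",
                noisy[h / 2 * w + w / 2], clean[h / 2 * w + w / 2]);
    return 0;
}

A production denoiser is edge-aware and fed auxiliary albedo and normal buffers so it doesn't smear geometry edges, and that is exactly where a trained network earns its keep over a fixed filter like this.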
  #7  
20-07-19, 07:26 AM
Avet
OC3D Elite
 
Join Date: Dec 2016
Posts: 1,120
Quote:
Originally Posted by Dawelio
Damn, this game looks sick... I personally find it hard to find games that appeal to me, but this one certainly did. And this is actually the first time I think RTX really shows its potential. All of the previous Battlefield V footage and examples just didn't really cut it... This one, on the other hand, did. Just damn!
Battlefield V, Tomb Raider, and Metro were just last-minute demos added to pretty-much-finished games. Now we will see the proper stuff.

I just hope Nvidia will implement NVLink fully with the next-gen cards. It was designed to eliminate SLI's limitations (having to keep the same data in both cards' memory). Imagine the horsepower of two 2080 Ti (3080 Ti) cards having access to each other's resources and working as a single unit. Oh... I would buy two of them if the games looked like this or better.
  #8  
20-07-19, 09:52 AM
tgrech
OC3D Elite
 
Join Date: Jun 2013
Location: UK
Posts: 1,397
NVLink is on the top-end Turing cards, but even the full variant currently has nowhere near the bandwidth required for that use case in gaming, and the chopped-down version on the GeForce cards especially doesn't. Even then, it would require developers to program explicitly for each type of card in this kind of setup via custom DX12 or Vulkan modes, and it would have terrible scaling going by Nvidia's own testing and research in this area. You'd literally need a crazy expensive optical link for this kind of thing to start to make sense with discrete high-end cards in gaming (of course, it would be much easier if we could put the chips right next to each other and maybe share a memory controller...).
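To put rough numbers on that bandwidth point (ballpark public figures, not measurements):

Code:
// Why NVLink can't make two cards act as "one GPU" for games: local VRAM
// bandwidth vs the link any remote-card memory would be read over.
#include <cstdio>

int main() {
    const double localVram    = 616.0; // GB/s, RTX 2080 Ti GDDR6 (14 Gbps x 352-bit)
    const double nvlinkSingle = 25.0;  // GB/s per direction, one NVLink 2.0 link (RTX 2080 class)
    const double nvlinkDouble = 50.0;  // GB/s per direction, two links (RTX 2080 Ti class)

    std::printf("Remote reads over one link:  ~%.0fx slower than local VRAM\n",
                localVram / nvlinkSingle);
    std::printf("Remote reads over two links: ~%.0fx slower than local VRAM\n",
                localVram / nvlinkDouble);
    return 0;
}

A roughly 12-25x penalty on every remote access is why shared-memory multi-GPU in games needs either a vastly fatter link or chips sharing a package.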
  #9  
21-07-19, 07:47 AM
Avet
OC3D Elite
 
Join Date: Dec 2016
Posts: 1,120
Quote:
Originally Posted by tgrech
NVLink is on the top-end Turing cards, but even the full variant currently has nowhere near the bandwidth required for that use case in gaming, and the chopped-down version on the GeForce cards especially doesn't. Even then, it would require developers to program explicitly for each type of card in this kind of setup via custom DX12 or Vulkan modes, and it would have terrible scaling going by Nvidia's own testing and research in this area. You'd literally need a crazy expensive optical link for this kind of thing to start to make sense with discrete high-end cards in gaming (of course, it would be much easier if we could put the chips right next to each other and maybe share a memory controller...).
Yeah... maybe I overreached with my wishes. But it can certainly be utilized to a much higher level than the older links. DX12 and Vulkan were designed for exactly that purpose: multi-core and multi-GPU support (a toy sketch of that threading model follows below). DX12 was released four years ago; you can develop a full engine from scratch in that time. Look at the id Tech 6 engine, which outperforms everything by a large margin. It should be new engines or nothing. Even though the technology exists, games are still constrained by a ten-year-old platform.
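For reference, the multi-core model DX12 and Vulkan expose looks roughly like the sketch below. Every name in it is invented for illustration; in real DX12 each worker thread owns an ID3D12CommandAllocator and ID3D12GraphicsCommandList, and the single submission point is ExecuteCommandLists on a command queue.

Code:
// Toy model of explicit multi-threaded command recording: each worker thread
// records its own command list independently, and the main thread submits
// them in order. All types and names here are invented for illustration.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

using CommandList = std::vector<std::string>;

// Each thread records draw commands for its slice of the scene in isolation,
// which is what lets the API scale across cores (no shared driver lock).
CommandList recordChunk(int firstObject, int lastObject) {
    CommandList list;
    for (int i = firstObject; i <= lastObject; ++i)
        list.push_back("draw object " + std::to_string(i));
    return list;
}

int main() {
    const int numThreads = 4, objectsPerThread = 3;
    std::vector<CommandList> lists(numThreads);
    std::vector<std::thread> workers;

    // Record in parallel: each worker writes only its own list.
    for (int t = 0; t < numThreads; ++t)
        workers.emplace_back([t, &lists, objectsPerThread] {
            lists[t] = recordChunk(t * objectsPerThread, (t + 1) * objectsPerThread - 1);
        });
    for (auto& w : workers) w.join();

    // Single submission point, analogous to ExecuteCommandLists on a queue.
    for (const auto& list : lists)
        for (const auto& cmd : list)
            std::printf("%s\n", cmd.c_str());
    return 0;
}

The point is that recording happens in parallel with no shared driver lock, which is exactly the DX11 bottleneck being complained about.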

AMD pushed developers to utilize more cores. I really hope Microsoft and Nvidia push them away from the ten-year-old DX11 platform and onto DX12 and DXR only.
  #10  
21-07-19, 09:43 PM
NeverBackDown
AMD Enthusiast
 
Join Date: Dec 2012
Location: With the Asguardians of the Galaxy
Posts: 16,195
Quote:
Originally Posted by Avet
Yeah... maybe I overreached with my wishes. But it can certainly be utilized to a much higher level than the older links. DX12 and Vulkan were designed for exactly that purpose: multi-core and multi-GPU support. DX12 was released four years ago; you can develop a full engine from scratch in that time. Look at the id Tech 6 engine, which outperforms everything by a large margin. It should be new engines or nothing. Even though the technology exists, games are still constrained by a ten-year-old platform.

AMD pushed developers to utilize more cores. I really hope Microsoft and Nvidia push them away from the ten-year-old DX11 platform and onto DX12 and DXR only.
New engines or nothing?! Haha, yeah right. Good luck convincing companies to invest tens of millions of dollars to do that, not to mention at least three years of development.

DX11 is fine for now. Its support, tools, and libraries are backed by ten years of development. Many games are still releasing on it and offering great visuals and performance. There's not much reason to fix what isn't broken.

DX12 and Vulkan still need a lot of work to get running well, and the support and tooling around them are considerably thinner than DX11's.

Stability over latest and greatest technology. That's what every developer follows for large scale projects.
__________________
I am Iron Man.