Microsoft reveals DirectX Raytracing (DXR) for DirectX 12
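
For anyone curious what the API side looks like: DXR is exposed through the Direct3D 12 API, and applications can query whether a device supports it. Below is a minimal sketch of that check in C++; it assumes a Windows SDK recent enough to ship the DXR headers (`D3D12_FEATURE_D3D12_OPTIONS5` and the raytracing tier enum), which arrived after the initial experimental release.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available\n");
        return 1;
    }

    // Query the OPTIONS5 feature block, which reports the raytracing tier.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("DXR supported (tier %d)\n",
                    static_cast<int>(options5.RaytracingTier));
    } else {
        std::printf("DXR not supported on this device/driver\n");
    }
    return 0;
}
```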

I wonder how performance will be, even though this is only a tiny fraction of real ray tracing as a whole.

It likely won't be great for another hardware generation, until both GPU vendors have some form of hardware acceleration. Or at least that's when we can expect to see these features used more widely.

Looking forward to seeing this in some games.
 
Yeah, that's why I'm anxious about it. If only a $3000 GPU can handle it, imagine how everyone else will feel :p

It's not that only a Titan V can handle it; it's just that the Titan V can lessen the workload with specific hardware features. If used sparingly, ray tracing could be possible on today's gaming hardware.

Right now the prevailing theory is that Nvidia is using their Tensor cores for acceleration, which could mean that Tensor cores are coming to gaming GPUs. This harks back to the Turing rumours, where I and others speculated that AI features/Tensor cores could be coming to Nvidia's gaming hardware.

I wonder if AMD could accelerate this with Rapid Packed Math and FP16 compute; packing two FP16 operations into each 32-bit ALU lane would deliver a 2x throughput boost right there (see the sketch below).
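
To illustrate where that 2x comes from: Rapid Packed Math executes two FP16 operations in each 32-bit ALU slot, doubling peak arithmetic throughput over FP32. A quick back-of-the-envelope in C++ (the shader count and clock below are illustrative, roughly Vega-shaped, not a quoted spec):

```cpp
#include <cstdio>

int main() {
    // Illustrative figures only (roughly Vega 64): 4096 FP32 ALUs at ~1.5 GHz.
    const double alus = 4096;
    const double clock_ghz = 1.5;

    // Each ALU retires one FMA per cycle, which counts as 2 floating-point ops.
    const double fp32_tflops = 2.0 * alus * clock_ghz / 1000.0;

    // Rapid Packed Math packs two FP16 values into each 32-bit lane,
    // so peak FP16 throughput is exactly double the FP32 rate.
    const double fp16_tflops = 2.0 * fp32_tflops;

    std::printf("Peak FP32: %.1f TFLOPS\n", fp32_tflops); // ~12.3
    std::printf("Peak FP16: %.1f TFLOPS\n", fp16_tflops); // ~24.6
    return 0;
}
```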
 
Tensor cores would presumably increase the price of a gaming part, wouldn't they?

Well, they would take up die space, making the GPU physically bigger, though that would be counteracted by the move to 12nm, the node Volta currently uses.

Nvidia is being very non-specific about Volta and how its changes help with ray tracing, though they have shown before that Tensor cores help with that workload.

Whether or not the use of die space here is worthwhile will depend on how well the cores are utilised in software, though with the way things seem to be going, I'm sure the feature will get used. What remains to be seen is how AMD will handle ray tracing and whether they have anything that can help with that workload.

AMD has a session about ray tracing at GDC this Wednesday, so we should know more soon.

https://overclock3d.net/news/gpu_displays/amd_will_be_hosting_seven_sessions_at_gdc_2018/1

Ray tracing is a compute-heavy task, which could give AMD an advantage (given how compute-focused their GPU architecture is), though that remains to be seen.
 
I'd like to hear SPS talk about it. You should interview him. Our own community has a professional developer; let's pick his brain :)
 
I appreciate the recommendation, but by the sounds of it there are already a lot of big developers involved, so I'd expect plenty of information over the GDC period. It looks like pretty awesome stuff, though! Definitely going to give this a go.
 