Crytek Showcases Real-Time Ray Traced Reflections in CryEngine on RX Vega 56

Oh dear, Nvidia's share price has already taken a hell of a beating.

Yeah, but barely anyone uses CryEngine, and Crytek still needs to show this running live. It's very interesting tech, but they still have to prove themselves.

I can't remember the last game I played with CryEngine, aside from Kingdom Come: Deliverance.
 
D'you know what? I've kinda had this feeling since RT came along that AMD's big fat heater cards would somehow be able to do it, kinda by throwing brawn at it.

It would also not be the first time that Nvidia have tried to monopolise a technology by coming up with a small lump of hardware that you must have to be able to run something (G-Sync for example).

Like I said ages ago, RT will only happen when it's for everyone, not just a tiny % of half of the market (I don't mean half by actual user figures, I mean half as in AMD and Nvidia; two halves make a whole and all that s**t).

That's exactly why Microsoft have just made DX12 work on Windows 7: they want everyone to code for DX12 (probably more importantly for their future consoles).

It might take just one meaty game like Crysis 4 to turn this whole shenanigans on its head.

The last part of what you said in the article is very telling, IMO. This bit...

Crytek has also stated that the technique will be optimised for the latest graphics cards and supported APIs like Vulkan and DirectX 12.

Or in other words, "Pascal's gonna suck". God, it's almost like Nvidia knew all of this and rushed out their RTX cards before it happened. And yes, that's proper tinfoil-hat stuff, but it does seem mighty odd! I can just see Nvidia in the boardroom now...

"Quick ! RT is about to become a thing let's quickly get out a tech unique to us before AMD can come up with anything !"
 
Every graphics card can do real-time ray tracing, some better than others. RTX just has hardware dedicated to it. It was, and is, just a matter of software optimisation. It's no different from any other in-game effect. Look at PhysX: you needed to buy an Ageia card at first, but then, bang, Nvidia added it to their GPUs as a software overhead. Remember it used to drop FPS massively? Not any more; it's a standard (if outdated) feature of their cards.
 
Yeah, Microsoft said they started open fine-tuning of the DXR API between hardware vendors and game engine developers around spring 2017, while work on Turing's RT implementation started a couple of years prior. Of course, hardware of this scale takes around 18 months just to go from final design to shipping product, so I think all three expectant GPU hardware vendors have known about each other's moves for far longer than any of us, and their timing regarding hardware is entirely down to their own assessments of its economic viability and when they expect consumers to accept it.

It's really hard to say what Crytek actually have here, and what is pre-baked or real-time corrected trickery or similar, until they release something that can be verified by a third party. They could have found some really innovative shortcuts to world reflections that will help all vendors and allow hardware RT implementations to focus on other areas like refractions and subsurface scattering rather than world reflections, which will help a lot with things like water, rain, or human skin (woo, no more plastic people).
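Just to put something concrete behind the "every card can trace rays" point above: the core operation that RT cores accelerate is just an intersection test, and any GPU can run the same maths in a compute or pixel shader (or even a CPU can, slowly). Here's a rough C++ sketch of a ray-sphere test, purely for illustration; the names and layout are my own, not anything from Crytek's demo:

[CODE]
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray    { Vec3 origin, dir; };   // dir is assumed to be normalised
struct Sphere { Vec3 centre; float radius; };

// Distance along the ray to the nearest hit, if any.
// This intersection test is the kind of work RT cores accelerate in hardware;
// on any other GPU the same maths can run in an ordinary compute shader.
std::optional<float> intersect(const Ray& ray, const Sphere& s)
{
    Vec3  oc   = ray.origin - s.centre;
    float b    = dot(oc, ray.dir);
    float c    = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;   // ray misses the sphere entirely
    float t = -b - std::sqrt(disc);         // nearest of the two roots
    if (t < 0.0f) return std::nullopt;      // sphere is behind the ray origin
    return t;
}
[/CODE]

A reflection is then just a second ray fired from the hit point, which is why scaling this to a whole scene without dedicated hardware needs exactly the kind of shortcuts Crytek seem to be hinting at.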
 
Every graphics card can do real-time ray tracing, some better than others. RTX just has hardware dedicated to it. It was, and is, just a matter of software optimisation. It's no different from any other in-game effect. Look at PhysX: you needed to buy an Ageia card at first, but then, bang, Nvidia added it to their GPUs as a software overhead. Remember it used to drop FPS massively? Not any more; it's a standard (if outdated) feature of their cards.

At first the PhysX PPU was put onto the die of Nvidia cards. There used to be a workaround where you could run an AMD card as your main card and let the Nvidia GPU act as a PPU. The only other option was putting it onto your CPU, which tanked the performance.

But yeah, they've had plenty of techs like that. 3D Vision was another one that they just killed off (I wonder why?).
 
The difference, of course, being that in this case all three GPU vendors have already committed to supporting this API, with the only exclusivity occurring through timing. Nvidia can't build a walled garden around this feature because it isn't theirs; they've just slapped some branding on top of it.

I think a better comparison is with CUDA: it's not the only programmable/DX10+ GPU compute option around, but it was the first by a similar margin, and so its branding has always stuck around.
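And the vendor-neutral part is visible right in the API: a game just asks D3D12 what raytracing tier the adapter supports, without caring whose silicon is underneath. Rough sketch, assuming a D3D12 device has already been created elsewhere and skipping error handling:

[CODE]
#include <windows.h>
#include <d3d12.h>

// Ask the driver which DXR tier this adapter supports.
// Any vendor that implements the DirectX Raytracing API reports a tier here;
// nothing about the query is specific to Nvidia hardware.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
}
[/CODE]

Whether it then runs at a playable frame rate is a different question, and that's where the dedicated RT hardware comes in.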
 
At first the PhysX PPU was put onto the die of Nvidia cards. There used to be a workaround where you could run an AMD card as your main card and let the Nvidia GPU act as a PPU. The only other option was putting it onto your CPU, which tanked the performance.

But yeah, they've had plenty of techs like that. 3D Vision was another one that they just killed off (I wonder why?).

Just adding some info here: you can still use an Nvidia GPU for PhysX with an AMD one as your primary, and it's technically not a hack. Also, PhysX is not outdated, as our fellow brother said in the previous comment; it was actually updated quite recently with great new additions to the PhysX library, and it's still better than most built-in game physics.

3D Vision was a bad idea to begin with. People never liked 3D anything, and I don't know why they still go to 3D cinema. I personally always liked the idea and still own two 3D TVs, but I never used them to game, as the input lag is massive and for some reason you had to either pay for an expensive 3D Vision-ready monitor plus glasses or for a special license that allowed you to use it on your 3D-capable TV. I'm not paying for that, NVIDIA! So really it's just tech that isn't used being killed off. Natural selection, one might say.

The PhysX processor is still embedded within NVIDIA GPUs. And no, you can't really run PhysX on the CPU: enabling that option only lets the CPU calculate legacy physics code, but the actually impressive parts won't happen, because they really need dedicated hardware to be processed. It might still be better than no PhysX at all, depending on how good the legacy physics the devs put into their game is.
 
PhysX PPUs were deprecated with version 2.8 around a decade ago, after DirectX 10/fast programmable shaders essentially made the concept useless; Nvidia cards have just used a CUDA-based SDK since then, nowadays bundled as a GameWorks library. As AMD has long had CUDA compatibility layers, GPU-accelerated PhysX has technically been arbitrarily ring-fenced from working on GCN hardware (but this means nothing now, because of what's below).

But PhysX in itself is just an SDK and has no hardware requirements in and of itself. It's been on essentially every game console of the last 20 years and is now just open-source software available for Windows, Android, iOS, etc., and most implementations since version 3's major rewrite (which completely removed support for PPUs, amongst other things) have been almost entirely CPU based, i.e. no GPU acceleration layer in the games at all besides a couple of effects.
This is because the CPU was essentially just a fallback layer pre-3.0, with very outdated single-threaded x87-based legacy code. Post-3.0, PhysX was rewritten entirely with a focus on multi-core CPUs and the CUDA layer took a back seat, and because the CPU would often work better in most systems than using up CUDA cores to push it, fully GPU-accelerated PhysX pretty much died five years ago.

With PhysX 4.0 the focus is massively on cross-platform experiences; it's now primarily a multi-platform API for multi-core CPUs, designed to run on anything from ARM cores in smartphones upwards: https://developer.nvidia.com/physx-sdk
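For anyone curious, the CPU-first design shows up right in how a scene is set up with the current SDK. A minimal sketch against the public PhysX 4.x API follows; the worker thread count and the stepping loop are just examples, and actors/shapes are left out for brevity:

[CODE]
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main()
{
    // Core SDK objects -- no GPU and no vendor-specific code required.
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // The simulation is driven by an ordinary multi-core CPU dispatcher.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);   // 4 worker threads, just as an example
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // Step the simulation at 60 Hz for ten seconds; actors and shapes omitted for brevity.
    for (int i = 0; i < 600; ++i)
    {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true);
    }

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}
[/CODE]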
 