Nvidia's Next Driver To Give Mantle-Type Gains

Damien c

So Nvidia have a not-yet-released driver that they say will give performance gains similar to Mantle's, but these gains are achieved in DirectX 11.

http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-Talks-DX12-DX11-Efficiency-Improvements

NVIDIA shows this another way by also including AMD Mantle. Using the StarSwarm demo, built specifically for Mantle evaluation, NVIDIA’s GTX 780 Ti with progressive driver releases sees a significant shift in relation to AMD. Let’s focus just on D3D11 results – the first AMD R9 290X score and then the successive NVIDIA results. Out the gate, the GTX 780 Ti is faster than the 290X even using the R331 driver. If you move forward to the R334 and the unreleased driver you see improvements of 57% pushing NVIDIA’s card much higher than the R9 290X using DX11.

If you include Mantle in the picture, it improves performance on the R9 290X by 87% - a HUGE amount! That result was able to push the StarSwarm performance past that of the GTX 780 Ti with the R331 and R334 drivers but isn’t enough to stay in front of the upcoming release.

Thief, the latest Mantle-based game release, shows a similar story; an advantage for AMD (using driver version 14.2) over the GTX 780 Ti with R331 and R334, but NVIDIA’s card taking the lead (albeit by a small percentage) with the upcoming driver.
Will be interesting to do some game tests and benchmark tests.
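To make the quoted percentages concrete, here's a quick back-of-the-envelope check. The baseline frame rates below are invented purely for illustration; only the 57% and 87% gains come from the article:

```python
# Hypothetical baseline StarSwarm frame rates (made up for illustration);
# only the percentage gains are taken from the PC Perspective article.
gtx_780ti_r331 = 32.0   # assumed baseline, fps (faster than the 290X, per the article)
r9_290x_dx11   = 26.0   # assumed baseline, fps

gtx_780ti_new  = gtx_780ti_r331 * 1.57  # +57% with the unreleased driver
r9_290x_mantle = r9_290x_dx11   * 1.87  # +87% switching DX11 to Mantle

print(f"780 Ti, new driver : {gtx_780ti_new:.1f} fps")
print(f"290X, Mantle       : {r9_290x_mantle:.1f} fps")
# With these assumed baselines, Mantle's 87% gain (48.6 fps) beats the
# 780 Ti on R331 (32 fps) but not the 780 Ti on the new driver (50.2 fps),
# matching the ordering the article describes.
```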
 
This is gonna be interesting :)

I love how PC gaming is finally getting the much-needed advancements and optimizations to improve performance. Mantle, DX12, improved OGL, that other API some other group is working on, and now this improvement with DX11. The next couple of years are going to see some good changes for all.
 
That would be quite impressive, but I'll take it with a grain of salt.
I doubt it will keep up with Mantle; DX11 has been around for quite some time, and performance increases like that usually don't happen this late in an API's life.
If they manage to do this, AMD will look like morons.
 
tbh if they manage to do this, nVidia will look like morons... why would it take this long to get that kind of performance gain from drivers? Doesn't this just show how much they've been slacking?
 
tbh if they manage to do this, nVidia will look like morons... why would it take this long to get that kind of performance gain from drivers? Doesn't this just show how much they've been slacking?

Either that or they have been holding back on purpose; it is a bit strange that after all this time they suddenly unlock performance in DX11. Plus, if the improvements are within DX11, then I would have thought AMD could utilize the same improvements as well.

I doubt AMD, OGL and MS would create new APIs if it could all be done in DX11 though.
 
tbh if they manage to do this, nVidia will look like morons... why would it take this long to get that kind of performance gain from drivers? Doesn't this just show how much they've been slacking?

This is exactly what I was thinking...
 
NVIDIA always over-promise. I'll believe it when I see it. And this is coming from someone who has been using their products since the 6800GT a decade ago.

(6800GT, 7800GT, 8800GTX, GTX 260, GTX 480, GTX 780, all the desktop NVIDIA cards I've had).
 
tbh if they manage to do this, nVidia will look like morons... why would it take this long to get that kind of performance gain from drivers? Doesn't this just show how much they've been slacking?

I suppose both manufacturers will look like morons then.
And yes, they have been slacking; what's the point of delivering something better if there is no need to deliver it?
 
StarSwarm is not even a good benchmark; it's far from consistent.
Plus, Oxide have said they optimized the DX11 path far more than the Mantle one, so Nvidia really only optimized for the benchmark, not for everything else.

All the benchmark is good for is seeing relative gains, not consistent, repeatable scores.
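If anyone does run StarSwarm, a minimal sanity check on run-to-run consistency would look something like this (the scores here are hypothetical):

```python
import statistics

# Hypothetical StarSwarm scores from repeated runs of the same config;
# the demo's RTS battle is partly randomized, so run-to-run spread is large.
runs = [41.2, 35.8, 44.1, 38.9, 36.5]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
print(f"mean {mean:.1f} fps, stdev {stdev:.1f} fps "
      f"({100 * stdev / mean:.0f}% run-to-run variation)")
# With ~9% spread, a single run can't distinguish a 5-10% driver gain;
# only large relative deltas (like the quoted 57%/87%) rise above the noise.
```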
 
I think this deserves a healthy dose of scepticism, just as Mantle did.

What makes me think this could actually be true is looking at the chips from a pure 'transistors-on-silicon' perspective: the performance gains have come in crappy steps compared to how much more hardware has been thrown at them.

The maths just doesn't add up (to me at least), unless there really is a true diminishing return. But that has never really been the case until recent years, or at least it's never been shown that way. When a new GPU came out that had x more shaders or whatever, it damn well did that much better; none of this 30% increase crap like we get with these £500 to £1000 SKUs.
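A rough sketch of the kind of sanity check meant here, using real shader counts for a GTX 680 vs a GTX 780 Ti but hypothetical benchmark results:

```python
# Illustrative only: compare hardware scaling to delivered performance.
# Shader counts are real (GTX 680 vs GTX 780 Ti); the fps figures are
# hypothetical placeholders for whatever benchmark you care about.
old_shaders, new_shaders = 1536, 2880   # GTX 680 -> GTX 780 Ti
old_fps, new_fps = 60.0, 78.0           # assumed measured results

hw_scaling = new_shaders / old_shaders  # ~1.88x the shader hardware
perf_scaling = new_fps / old_fps        # ~1.30x delivered performance
print(f"hardware grew {hw_scaling:.2f}x, performance grew {perf_scaling:.2f}x")
print(f"efficiency of the extra silicon: {perf_scaling / hw_scaling:.0%}")
# If delivered gains consistently lag the silicon thrown at them, either
# there are real diminishing returns (clocks, bandwidth, CPU overhead)
# or performance is being left on the table, which is the poster's suspicion.
```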

I really wouldn't be surprised if they had been intentionally (or otherwise) handicapped so that they can re-badge the SKUs as they have been doing and 'unlock' more potential with future drivers.

I mean, just look at what a friggin' Xbox 360 can do TODAY compared to equivalent hardware; it's pretty crazy. Now, before you say it, I know it's done differently: different software, different OS, different way of talking to the chips. But it's the same GRUNT; the difference is the software that's utilising it.

What, really, is stopping PCs, with the flexibility that they have, from harnessing that raw power properly?

You get what I mean. :P
 
It's not like Nvidia have been in a league of their own; AMD GPUs have been as fast or faster until recently (thanks, cryptocurrency). My point is, why would they do this now as opposed to when the 680/7970 were neck and neck in performance? Surely that would have been a better time? I call it either a BS PR stunt (Nvidia aren't above this, let's be honest) or just a large amount of optimization in games/benchmarks that AMD GPUs do better in.
 
The way I see it, even if it is true, AMD will be able to get the same improvements.
Seeing as what Nvidia are claiming is just performance unlocked within DX11 itself, rather than anything unique to Nvidia, there is no reason why AMD wouldn't be able to do the same.

It is all a bit strange that this suddenly comes about right when Mantle is in the press. It would be great if Nvidia can actually do what they are saying, but it all looks a bit fishy when you consider the timing, the circumstances surrounding it, and why it has taken until now for them to do it.

I'd also like to know which 290X they are using in the graphs they showed. If it is a reference 290X, then performance is gimped by thermal throttling, and the difference from thermal throttling alone could account for the extra performance Nvidia are claiming.
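A rough illustration of how much throttling alone could account for, assuming performance scales roughly with sustained clock and that a reference 290X drops from its rated 1000 MHz to around 850 MHz (a widely reported figure, but an assumption here):

```python
# Rough illustration: reference R9 290X throttling from its rated "up to"
# clock to an assumed sustained clock, with performance taken as roughly
# proportional to clock (a simplification).
rated_clock = 1000.0      # MHz, rated clock on the reference 290X
throttled_clock = 850.0   # MHz, assumed sustained clock when throttling

perf_lost = 1.0 - throttled_clock / rated_clock
print(f"~{perf_lost:.0%} performance lost to throttling")
# A ~15% gap from throttling alone is in the same ballpark as a small
# driver-to-driver lead, so which 290X (reference vs non-reference,
# quiet vs uber mode) was benchmarked genuinely matters.
```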

It is also a bit strange that they are using the AMD 14.2 drivers for Thief when 14.3 are the drivers that were released alongside the Thief Mantle patch, which added Mantle optimizations and TrueAudio support.
 
It is also a bit strange that they are using the AMD 14.2 drivers for Thief when 14.3 are the drivers that were released alongside the Thief Mantle patch, which added Mantle optimizations and TrueAudio support.

Because it makes Nvidia look better. I'm pretty sure the Mantle version of Thief would make their gains look a lot less impressive.
 
It's probably mostly PR, but I will be doing some testing anyway when it's released to see if there really are any improvements.
 
I think that DX12 is just closing the CPU overhead gap between DX and Mantle so that they can sell more graphics cards to us... DX13 is the next one to watch out for...
 
I think that DX12 is just closing the CPU overhead gap between DX and Mantle so that they can sell more graphics cards to us... DX13 is the next one to watch out for...

Microsoft is developing DX12, and they have no interest in selling us GPUs.
This is a very important step forward in performance; nobody knows what DX13 might offer.
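That CPU overhead gap is the crux of the whole thread. A toy model of a CPU-bound frame makes it concrete; every per-call cost below is invented for illustration:

```python
# Toy model of a CPU-bound frame: frame time is dominated by the CPU cost
# of submitting draw calls. All numbers are invented for illustration.
def fps(draw_calls, us_per_call, other_cpu_ms=4.0):
    """Frames per second when the CPU, not the GPU, is the bottleneck."""
    frame_ms = other_cpu_ms + draw_calls * us_per_call / 1000.0
    return 1000.0 / frame_ms

calls = 10_000  # a draw-call-heavy scene, StarSwarm-style
print(f"fat API/driver path  : {fps(calls, 2.0):.0f} fps")  # ~2 us per call
print(f"leaner driver path   : {fps(calls, 1.0):.0f} fps")  # driver optimized
print(f"thin API (Mantle-ish): {fps(calls, 0.3):.0f} fps")  # low-overhead API
# Cutting per-call CPU cost raises fps only while the CPU is the bottleneck,
# which is why both a leaner DX11 driver and a thin API like Mantle/DX12
# can show similar headline gains on draw-call-heavy workloads.
```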
 