r9 290x with mantle vs 780 ti comparison and purchase help/opinions

Numaholic

New member
Hey everyone. :hello:

I've been trying to decide between an R9 290X and a 780 Ti for my build. It is a fairly high-end build with a 4770K and 16 GB of 2400 MHz RAM, and I will use it for gaming, some music composition and art-type things, including getting started with 3D rendering, Photoshop work and drawing.

My arguments for and against are:
- The 780 Ti is more costly than the 290X (for me, about $900 versus $730 for the 290X)
- The 780 Ti generally has the higher framerates/better benchmarks, though the gap is closing with non-reference 290X cards
- The 290X has more memory, which may be useful for rendering at higher resolutions
- If Mantle takes off, the 290X could be close to or even better than the 780 Ti in Mantle-supported programs/games
- Mantle may make the 290X a longer-lasting card (used more efficiently) and could also make the 780 Ti feel previous-gen and 'old', since it does not support Mantle

At the moment, I believe there is a good chance of Mantle taking off, given articles suggesting EA are including it in their Frostbite engine, and that Mantle may make games easier to port across from consoles. (Link to some of this information: http://www.dailyfinance.com/2013/11/25/amds-shift-is-beginning-to-pay-off/)

Now, for the comparison between the cards: according to one review (link removed), the 780 Ti consistently beats the 290X when both are Asus non-reference cards:


(average fps)
Game               290x    780 Ti
Battlefield 4      83      95.3
Crysis 3           36.9    42.6
Metro: Last Light  48      53.7
Unigine Valley     63.8    72.5
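To put that gap in perspective, here's a quick sketch that computes the 780 Ti's percentage lead from those review numbers (the figures come straight from the table above; nothing else is assumed):

```python
# Average fps from the review table above: (290X, 780 Ti)
results = {
    "Battlefield 4": (83.0, 95.3),
    "Crysis 3": (36.9, 42.6),
    "Metro: Last Light": (48.0, 53.7),
    "Unigine Valley": (63.8, 72.5),
}

for game, (fps_290x, fps_780ti) in results.items():
    lead = (fps_780ti / fps_290x - 1) * 100  # 780 Ti's lead, in per cent
    print(f"{game}: 780 Ti ahead by {lead:.1f}%")
```

Across these four tests the 780 Ti's lead works out to roughly 12-15 per cent.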

However, in comparisons where Mantle is definitely used:

Battlefield 4: 290x with Mantle - 130 fps; 780 Ti - 145 fps

Which seems quite odd when compared with other comparisons:

Battlefield 4 singleplayer: 290x with Mantle - 62.7 fps; 780 Ti - 62.1 fps

Battlefield 4 multiplayer: 290x with Mantle - 83.9 fps; 780 Ti - 70.2 fps

This conflicting information is fairly confusing, and I'm not sure whether Nvidia have simply not optimised their card for Battlefield 4, so that it performs worse in that game than in others. If so, the comparison would be slightly unfair and would make the 290X look better across the board than it actually is.

Anyway, with all this information, I really do not know what to think or what to buy. I would really appreciate other people's opinions on this and which GPU I should purchase.

Thanks!
 
Check if the programs you use work with CUDA, for starters. Mantle isn't really aimed at higher-end CPUs either. It comes down to whether you think you'll make use of the 780 Ti's extra grunt.
 
My advice is to buy for the here and now, i.e. don't worry too much about Mantle for the time being. The reason I say this is that it will take quite a while for Mantle to catch on and become successful, if it does at all. You're really looking at a few years here.

To be honest, leaving Mantle out of the equation, either card will do you fine. In all practicality, you probably won't notice a difference between them, whether it's gaming or anything else.
 
Whichever is cheapest (if both perform similarly). PP and relevant Adobe software will be getting more compute support, which should help AMD in productivity. Other than that, look at what your modelling software is geared towards and get that GPU.

For gaming, either card is epic, since they're the top of the range from both camps, with a price tag to boot. For overclocking, I've read that factory-overclocked 780 Tis will pretty much deliver the performance you'll be getting whether you tinker with them or not; the HardOCP review of the MSI 780 Ti Gaming outlines as much.

As to the 290X, the only one I'd personally buy would be the not-yet-released MSI 290X Lightning. Hopefully, with the extra juice, better PCB et al. (and hopefully a cherry-picked GPU chip), it will reach higher OCs than what's been done so far. Since Nvidia won't allow much tinkering with their IP, this could put the 290X in the lead. That's if you're into overclocking...

All things considered, it seems you'll be using this for work quite a bit, so my tip of the hat goes to the 780 Ti for CUDA and current software support for their hardware. Plus it plays games well!
 

The real beast for OCing (as mentioned, if you'd be up for it) is the Sapphire Toxic 290. That thing OCs like mad.
 
Hi there, I found this; hope it helps you with your decision:


the D3D problem - tying the CPU up

AMD has made a big deal about Mantle - a co-developed graphics application programming interface (API) that redresses some of what the company believes to be lacking in DirectX 11. You see, while DirectX is great as a vendor-agnostic API, AMD is at pains to point out that, right now, significant CPU overhead is incurred by having the driver translate commands from the API to ones the GPU can understand and action.
And it's clearly not just about the driver. Modern games ask the GPU to render complex scenes that require the CPU to fulfill lots of draw calls (or commands) per frame. The purpose of these calls is to tell the GPU to draw an object, or to do some new work. The parallel power of cutting-edge GPUs is such that thousands of draw calls are required to keep them busy and efficient, putting the onus on the CPU to mete them out.
The process works by having the calls passed on from the application, to the API, and then to the graphics driver, but running this via DirectX can add inefficiencies and additional rendering time along the way, causing performance to be bound by the ability of the CPU rather than the intrinsic power available from the GPU.
DirectX has markedly improved in this respect - DirectX 9 (and previous) APIs were horrible in this regard - but the current mechanism by which draw calls are sent over and understood by the GPU inhibits the ability of mainstream CPUs to deliver adequate performance to quality graphics cards. This situation is particularly prevalent when dealing with high-end cards, and the real-world upshot is stifled performance that, in theory, could be made better via a more-efficient API that facilitates the passing of draw calls to a greater degree.
What's needed, therefore, is a console-like API that provides lower-level, less-overhead access from the CPU to the GPU, enabling the latter to work more efficiently at rendering high frame rates by giving the GPU the code it needs in an easy-to-execute manner. In a nutshell, this is part of what Mantle is, and it has been implemented in the latest patch of Battlefield 4.
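The CPU-bound behaviour described above can be sketched with a toy model: a frame is finished only once the CPU has issued every draw call and the GPU has rendered the result, so frame time is roughly the maximum of the two. All figures below are made-up illustrations, not measurements from any real API:

```python
# Toy model of API overhead: frame time is bounded by whichever of the
# CPU (issuing draw calls) or the GPU (rendering) finishes last.
# All numbers here are illustrative assumptions, not benchmarks.

def frame_rate(draw_calls, cpu_us_per_call, gpu_ms):
    """Approximate fps given per-draw-call CPU cost and GPU render time."""
    cpu_ms = draw_calls * cpu_us_per_call / 1000.0  # CPU time to submit all calls
    frame_ms = max(cpu_ms, gpu_ms)                  # the slower side sets the pace
    return 1000.0 / frame_ms

# A "DirectX-like" path with heavy per-call overhead: CPU-bound.
print(frame_rate(draw_calls=5000, cpu_us_per_call=5.0, gpu_ms=10.0))  # 40.0 fps

# A "Mantle-like" path with lower per-call overhead: now GPU-bound.
print(frame_rate(draw_calls=5000, cpu_us_per_call=2.0, gpu_ms=10.0))  # 100.0 fps
```

Note that once the submission cost drops below the GPU's render time, cutting it further buys nothing, which matches the article's point that fast CPUs see little benefit.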

Being an AMD technology, Mantle works with the company's GCN-based products, that is, discrete Radeon R9/R7/HD 7000/HD 8000 cards and the IGP in the latest Kaveri APU. That said, the publicly-available driver is only optimised for the Radeon R9 290-series of GPUs.

How it's run

Once the Catalyst 14.1 beta 6 Mantle-supporting drivers are installed and Battlefield has been patched up to the latest build as at February 4, flitting between the DX11 and Mantle APIs is a simple matter of changing one setting in the video section, exiting the game and then loading it back up.



The most commonly-used application for measuring frame-rates is FRAPS. It works by collecting data in the DirectX pipeline and then logging it into files that provide a number of performance variables. Newer, more-advanced tools, such as Nvidia's FCAT, strive for greater accuracy, but for a single-GPU system, the FRAPS output is as good as anything else available.
FRAPS, however, isn't compatible with the Mantle API - it is designed for DirectX, after all - so the folks over at DICE have added a few shortcuts that enable logging. Provided the Mantle patch and driver are installed, all you need to do to try it yourself is bring up the console and type 'PerfOverlay.FrameFileLogEnable 1' to start logging and 'PerfOverlay.FrameFileLogEnable 0' to finish. BF4 spits out log files that can be used to calculate an effective frames-per-second metric. Cursory examination reveals no obvious image-quality differences when running either code path.
Expectations

The obvious expectation is for the Mantle API to run faster than DirectX 11 when a mainstream (draw-call-limited) CPU is paired with a high-end card. AMD cites examples such as an A10-7700K APU paired with R9 290X graphics, though it's doubtful that most enthusiasts would consider such a combination. Switch over to a faster, better CPU, such as the Intel Core i7-4770K, and the performance uplift potential is limited - the CPU's innate power is enough to overcome the DX11-induced hurdles. Here, AMD's own testing shows, as expected, limited performance increases when switching APIs.
We chose to test Battlefield 4 with two combinations. On the one hand we have an AMD A10-7850K Kaveri APU tied to a Radeon R9 290X, and on the other, an Intel Core i5-4670K with the same card. The Intel system is also run with a reference GeForce GTX 780 Ti, to see how Nvidia's card fares against the Mantle-run R9 290X. Testing is done on Windows 8.1 64-bit and, save for differences in CPU and motherboard, both systems are otherwise identical.
Results


Our map uses an outdoor scene that's considered CPU-limited for the most part. Baseline performance is the A10-7850K and Radeon R290X running via good ol' DX11. The score's actually decent for 1,920x1,080 and ultra-quality settings. Invoking the Mantle path increases performance by around 10 per cent.
What's more telling is that a Core i5-4670K is faster than the Mantle-infused A10-7850K when using regular DirectX. Running Mantle has less of a positive effect, most likely due to the beefier processing and draw-call ability of the Intel chip, but there's still a repeatable increase.
Nvidia fans will point to the fact that a regular GTX 780 Ti is comfortably faster than the Mantle-driven Radeon R290X in our scene. Sure, we could artificially engineer situations where Mantle performance looks better by finding particularly CPU-limited scenes that are rife with large structures collapsing, as AMD has likely done in its benchmarking notes, but doing so would be disingenuous.
Initial conclusions


Battlefield 4 is the poster-child for AMD's Mantle technology. Though the theory surrounding the new API makes a lot of inherent sense, practicalities mean that genuine increases in performance require pairing a mainstream CPU with an expensive graphics card. Anything less than this eclectic combination and gains are unlikely to be perceivable at what we'd term high-quality video settings.
Perhaps the biggest stumbling block for AMD right now is that a roughly-comparable CPU from Intel has enough power to run the same high-end graphics card at a faster pace, via DirectX, than AMD can manage through Mantle.
Mantle has to demonstrate significant speed-ups over DirectX if it is to be taken on by a broad range of developers and games engines. Coding for Mantle compatibility requires additional resources that smaller studios are likely unable to bear. Mantle has some good ideas on how to reduce resource overhead and enable a better gaming experience, so it may be incumbent on Microsoft to learn a few lessons from this vendor-specific API and roll them into the next iteration of DirectX. Such a move would work on all fronts, giving AMD credit for invigorating the industry and all gamers extra performance for free.
We'll be looking into Mantle performance across a larger number of CPUs and GPUs in the near future. AMD says that Mantle is very much a work in progress, but from what we have seen thus far, there's potential to elevate performance for mass-market PCs... which is as good a reason as any to have Mantle.
 
Thanks for all the replies.

It seems that:
- Mantle will not provide much of a performance boost for me because I have a high-end CPU
- It will take Mantle a few years to come into significant play
- Even if it does, the 780 Ti will still be just as good
- I'm using 3ds Max for rendering, which works well with Nvidia, so I can make use of CUDA
- In reality, there won't be a noticeable difference in games
- The benchmarks chosen to show off Mantle could have been biased towards more CPU-intensive scenes (as well as a more mainstream CPU)

I think the difference between the 290X and 780 Ti in gaming won't really be a factor; however, in 3D rendering the 780 Ti will probably win out.
 
The way to view Mantle: even if it is a success (which I don't think will happen), all of today's cards will be obsolete by the time Mantle is mainstream in two or three years' time.
 