Why? MSI Fuzion

name='Ghosthud1' said:
Yup guys, it's now apparent it's just poor performance!

I'm guessing it's down to the bandwidth :D

Defo nothing to do with bandwidth - it's just a poor implementation and bad drivers. Look at Crossfire or SLI on the board - plenty of bandwidth.
 
name='Dublin_Gunner' said:
Defo nothing to do with bandwidth - it's just a poor implementation and bad drivers. Look at Crossfire or SLI on the board - plenty of bandwidth.

+1 It's definitely not the bandwidth that's the problem :)
 
name='Dublin_Gunner' said:
Defo nothing to do with bandwidth - it's just a poor implementation and bad drivers. Look at Crossfire or SLI on the board - plenty of bandwidth.

The Fuzion board doesn't support SLI as such, just two nVidia GPUs working through the Hydra chip. It does, however, support Crossfire. Maybe the board cost so much that MSI couldn't afford the $5 licence!
 
name='Hemicuda' said:
Maybe the board cost so much that MSI couldn't afford the $5 licence!

Actually, adding SLI support to this motherboard would be rather expensive. Validating SLI is something like $50K per motherboard model, which is a steep price to pay for such a niche board... And Hydra doesn't do that badly in N-mode, so it's like "poor man's SLI" (which you'll probably be after buying the board... lol)

As for performance, I have to tell you, I was expecting MUCH worse. However, I believe it can get much better with time. AND with extra help from manufacturers (not likely to happen, but we can always hope).

There are two main problems here, I believe. First, Lucid has to "guesstimate" how and what to do with the video data, constantly trying to match what the cards/drivers expect to happen. That's difficult without manufacturer support.

Second, and much more serious: GPU programming is not as vendor-agnostic as it should be. Ideally, the OS itself should be able to do what Lucid does with Hydra: push video data to whichever GPU is "best" or "most available", then send the rendered frames to the video output. Until that happens, Lucid will have a very steep hill to climb, having to adapt to every GPU generation (and permutation).
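
Just to make that idea a bit more concrete, here's a toy sketch of what "send each batch of work to whichever GPU will finish it first" could look like. Everything in it (GpuDevice, dispatch, the speed figures) is made up for illustration - no current OS or driver exposes anything like this:

```python
from dataclasses import dataclass

@dataclass
class GpuDevice:
    name: str
    relative_speed: float      # 1.0 = baseline card; higher = faster (assumed figures)
    queued_ms: float = 0.0     # estimated work already queued on this GPU, in ms

    def estimated_finish(self, batch_cost_ms: float) -> float:
        # When this batch would be done if we queued it on this GPU.
        return self.queued_ms + batch_cost_ms / self.relative_speed

def dispatch(batch_cost_ms: float, gpus):
    # "Most available" GPU = the one that would finish this batch soonest.
    best = min(gpus, key=lambda g: g.estimated_finish(batch_cost_ms))
    best.queued_ms = best.estimated_finish(batch_cost_ms)
    return best

pool = [GpuDevice("older card", 1.0), GpuDevice("newer card", 1.6)]
for i in range(6):
    print(f"batch {i} -> {dispatch(5.0, pool).name}")
```

Hydra has to do this kind of guessing from outside the drivers, which is exactly why manufacturer cooperation would make such a difference.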

That's not to say Lucid should just give up. From what I've read over the last few years, it seems we're slowly getting to the point where the OS might actually be able to handle vendor-agnostic multi-GPU scenarios. If/when that happens, Hydra chips will be able to offload those tasks from the CPU, and probably handle PCIe traffic while they're at it, improving performance.

Until then, however, it's nice to see they're at least doing what they can to keep up.

Cheers..

Miguel
 
I think the idea of Hydra is pretty cool... but then again, I have always been skeptical of SLI and Crossfire in general. Until recently, with ATI's new offerings, multi-GPU setups have just seemed like a marketing gimmick to make enthusiasts like us shell out more for a second or third high-end card.
 
If they manage to get good performance it'd be pretty cool: when your old GPU is outdated, just buy a new one and use the old GPU for some extra performance instead of letting it rot.
 
name='wafelijs' said:
when your old GPU is outdated, just buy a new one and use the old GPU for some extra performance instead of letting it rot.

That's the best thing about Hydra. I do hope they pull it off properly; mixing GPUs is something probably every enthusiast and power user has dreamed of for a LOOONG while now.

However, do keep in mind that there are usually several limitations to keep track of (not limited to Hydra - SLI and CF suffer from the same niggles):

1) The lowest common denominator applies: DX10 + DX11 equals a DX10-capable setup, and a 4GB framebuffer + a 1GB framebuffer equals 1GB of usable framebuffer (unless, of course, Hydra manages to circumvent this limitation by not using AFR);

2) Unless a radically different rendering method is found, where objects can be rendered independently of frames and then combined somehow (not my area, so I may well be saying something very stupid), pairing GPUs with very different processing power will probably be counterproductive for the faster card, possibly even slowing it down. Even Lucid says "please use GPUs with similar speeds, else performance might not be that great", and NVIDIA and AMD outright refuse multi-GPU combos of cards that are too different, even now that they allow more flexible SLI/CF setups (see the sketch just below for rough numbers on both points).
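
A toy Python sketch, with assumed specs and frame times rather than anything measured on the Fuzion board:

```python
def effective_setup(dx_levels, framebuffers_mb):
    # Point 1: the pair behaves like the weaker spec on both counts.
    return min(dx_levels), min(framebuffers_mb)

def afr_fps(frame_times_ms):
    # Point 2: in plain AFR each card renders every other frame, so the
    # slowest card sets the pace: N frames per slowest frame time.
    return 1000.0 * len(frame_times_ms) / max(frame_times_ms)

print(effective_setup([10, 11], [4096, 1024]))  # -> (10, 1024): DX10 setup, ~1GB usable
print(afr_fps([10.0, 10.0]))   # matched cards: ~200 fps
print(afr_fps([10.0, 30.0]))   # mismatched pair: ~67 fps, behind the fast card alone (100 fps)
```

That last line is the "counterproductive" problem in one number: chained to a much slower partner under AFR, the fast card ends up below what it would do on its own.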

Cheers.

Miguel
 
Thought so already, though it would be sick if it were possible to get around those limitations. For now, we'll just wait and see how it all develops.
 