Gigabyte releases statement regarding SP-CAP/MLCC Capacitors on GeForce RTX 3080 GPUs

Doesn't EVGA counter this argument and state that they have tested and verified this to be the cause?

I wonder if Gigabyte are defending themselves because we can see what components they are using.

I found this list, so I am curious how well it correlates with the cards having issues.

Source is from EVGA's Jacob, in case others want to click the links within the table.

https://new.reddit.com/r/nvidia/com...?utm_source=reddit&utm_medium=web2x&context=3
 

Attachments: GPU list.jpg (39.1 KB)
It could be the cause (or a major aggravating factor) for EVGA and not the cause for Gigabyte. It depends on the rest of their power circuitry, and of course on which type of tantalum caps they selected. Gigabyte confirm here that they used some pretty high-end caps, well selected for high-frequency filtering.
 
Doesn't EVGA counter this argument and state that they have tested and verified this to be the cause?

I wonder if Gigabyte are defending themselves because we can see what components they are using.

I found this list, so I am curious how well it correlates with the cards having issues.

Source is from EVGA's Jacob, in case others want to click the links within the table.

https://new.reddit.com/r/nvidia/com...?utm_source=reddit&utm_medium=web2x&context=3

More like EVGA is just saying what people wanted to hear to shut them up for five minutes while Nvidia pushed out the driver update. Stupid move tbh.
 
Doesn't EVGA counter this argument and state that they have tested and verified this to be the cause?

I wonder if Gigabyte are defending themselves because we can see what components they are using.

I found this list, so I am curious how well it correlates with the cards having issues.

Source is from EVGA's Jacob, in case others want to click the links within the table.

https://new.reddit.com/r/nvidia/com...?utm_source=reddit&utm_medium=web2x&context=3

It is worth noting that there are multiple ways to fix a problem like this. EVGA wanted to address it without relying on an Nvidia driver fix, and changed components to do that.

This isn't a matter of MLCCs simply being better. We do not know which caps EVGA were using, so we cannot judge their quality.

They changed to something better, but that doesn't mean that a fix couldn't have been implemented with better SP-CAPs. It's a very complicated issue.

I'm no expert, but the situation is not just about MLCCs vs SP-CAPs. A lot of the blame here lies with Nvidia and their early drivers.
 
More like EVGA is just saying what people wanted to hear to shut them up for five minutes while Nvidia pushed out the driver update. Stupid move tbh.

That is also a stupid thing to say. You think EVGA want to tarnish their own reputation as, imo, the maker of the best Nvidia GPU cards on the market?

Of course not. Why would they admit to being "cheap"? They could easily have said it's not their cards, it's the driver, and shifted the blame elsewhere, which is often what big manufacturers do. Also, why would MSI and ASUS modify their upcoming cards if there wasn't a modicum of truth behind it?
 
That is also a stupid thing to say. You think EVGA want to tarnish their own reputation as, imo, the maker of the best Nvidia GPU cards on the market?

Of course not. Why would they admit to being "cheap"? They could easily have said it's not their cards, it's the driver, and shifted the blame elsewhere, which is often what big manufacturers do. Also, why would MSI and ASUS modify their upcoming cards if there wasn't a modicum of truth behind it?

As far as ASUS are concerned, all of their press and retail samples used MLCC capacitors. The images used on their website were from a pre-production sample.

While some are reporting that ASUS changed their designs, they haven't. It is common for engineering samples to be used for early images, and aspects of those designs change.

ASUS is not "modifying their next cards". I can't speak for MSI though.
 
Yeah, I think the "cheap" or "bad" framing comes from people misunderstanding what sites like Igor's mean when they say "MLCC caps are better". In this context, the electrical characteristics traditionally considered most important for clean high-frequency filtering can undoubtedly be had by using MLCC caps. But clearly some vendors believed they'd need, or would benefit from, a much higher capacitance than usual, presumably because of the larger spikes and loads. The much larger size of tantalum parts may have led some engineers to decide that the higher impedance and ESR of tantalum caps were a worthwhile compromise for more capacitance at a stable cost, as long as those values still remained within Nvidia's specs. So they went against the "traditional wisdom", and in the meantime may have trimmed a few cents off the BOM and made their boss happy.

Higher-ESR cap groups like this, in this use case, essentially mean your filtering is less effective and you get a noisier power output; that much you can take in isolation as (an oversimplified) fact. But how much noise getting through is okay? That depends on the rest of the power circuitry, the power input, the load on each chip, etc. Can you get around the issue in other ways? Yep. The obvious one is to nudge up the voltage on the affected rail so the noise becomes less significant, which I'd take a wild guess is what these drivers do, but there are many other possible approaches.

But at the end of the day, Nvidia's original specifications shouldn't have produced broken boards. These vendors kept well within Nvidia's claimed ESR requirements and still had issues. There's only one place the blame can be allocated imo.
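
To put some rough numbers on the ESR/high-frequency point, here's a quick sketch using a simple series-RLC capacitor model. The capacitance, ESR and ESL figures are ballpark assumptions for illustration, not datasheet values for any real part:

```python
import math

def cap_impedance(freq_hz, capacitance_f, esr_ohm, esl_h):
    """Impedance magnitude of a real capacitor modelled as series R-L-C:
    |Z| = sqrt(ESR^2 + (X_L - X_C)^2)."""
    x_c = 1.0 / (2 * math.pi * freq_hz * capacitance_f)
    x_l = 2 * math.pi * freq_hz * esl_h
    return math.sqrt(esr_ohm ** 2 + (x_l - x_c) ** 2)

# Assumed, ballpark figures -- not taken from any datasheet.
sp_cap = dict(capacitance_f=470e-6, esr_ohm=6e-3, esl_h=1.5e-9)  # one polymer SP-CAP
mlcc_bank = dict(capacitance_f=10 * 47e-6,                       # ten 47 uF MLCCs in parallel:
                 esr_ohm=2e-3 / 10, esl_h=0.5e-9 / 10)           # ESR and ESL divide by the count

for f in (0.1e6, 1e6, 10e6, 100e6):
    print(f"{f / 1e6:6.1f} MHz   SP-CAP {cap_impedance(f, **sp_cap) * 1e3:8.2f} mOhm   "
          f"MLCC bank {cap_impedance(f, **mlcc_bank) * 1e3:8.2f} mOhm")
```

With numbers like these, both look fine at a few hundred kilohertz, but the MLCC bank keeps a much lower impedance into the tens of megahertz, which is where sharp load spikes live. The exact crossover depends entirely on the parts actually chosen, which is the point above about staying within Nvidia's specs.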
 
(Without having actually read the capacitor datasheets) I'm not sure I totally buy Gigabyte's statement. Having more capacitance doesn't necessarily make a card better, because higher-capacitance capacitors usually have a different curve of internal resistance or inductance versus frequency. Even if they have a lower rated resistance or inductance, that might only hold for a range of frequencies outside the one where the card becomes unstable, so the extra capacitance can't actually be used. This is why the MLCC recommendation exists: MLCCs by nature have better high-frequency characteristics.
 
Doesn't EVGA counter this argument and state that they have tested and verified this to be the cause?

I wonder if Gigabyte are defending themselves because we can see what components they are using.

I found this list, so I am curious how well it correlates with the cards having issues.

Source is from EVGA's Jacob, in case others want to click the links within the table.

https://new.reddit.com/r/nvidia/com...?utm_source=reddit&utm_medium=web2x&context=3

EVGA originally used 6x220uF caps. That's 1320uF compared to Gigabyte's 2820uF. The equivalent MLCC caps are 47uF each, so a bank of ten of those is 470uF. Yes, having some MLCC banks is better for transient response as they react more quickly, but 6x470uF is still pretty good.
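
Quick sanity check on that arithmetic (a throwaway sketch using the per-part values quoted above):

```python
# Per-part values as quoted above; a bank is ten 47 uF MLCCs in parallel.
evga_original_uf = 6 * 220   # six 220 uF SP-CAPs  -> 1320 uF
gigabyte_uf      = 6 * 470   # six 470 uF SP-CAPs  -> 2820 uF
mlcc_bank_uf     = 10 * 47   # one bank of MLCCs   ->  470 uF

print(evga_original_uf, gigabyte_uf, mlcc_bank_uf)  # 1320 2820 470
```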

Check out der8auer's latest video: on the GB card he swapped two of the SP-CAPs out for MLCC banks, and it got him another 30MHz of overclock.
 
Well, it looks like Gigabyte wasn't telling the whole truth. Der8auer removed two of the SP-CAPs, put MLCC banks in instead, and got another 40MHz out of it. It was a high overclock though, and the six SP-CAPs would have been adequate if the card was left at stock. The card did overclock better once the change was made, though, so... The fact that they say "it is not true to assert that one capacitor type is better than the other" is laughable.
 
It is possible that Gigabyte is just giving an "it's not the caps, it's the drivers" line to push the focus off themselves, i.e. Gigabyte did everything by Nvidia's book so it isn't their fault. It may be that the lower-end specifications from Nvidia lack the filtering required to stably push the average 3080/3090 GPU to 2GHz+. The 30-series GPUs (so far) look to be pushing the boundaries of what the design is capable of, so a bit of noise might be enough to push them over the edge when boosting.

From what I understand (I know a fair bit of electronics, but nothing specialised enough to confidently back this up with my own knowledge), different types of capacitors have different filtering capabilities, e.g. SP-CAPs might be better at filtering noise in the low-megahertz range while MLCCs might be better at filtering noise in the high-megahertz range (just examples; I would have to do some research to give definite ones). I trust what Buildzoid says, and that is what I took from his video on this issue.
 
Well, it looks like Gigabyte wasn't telling the whole truth. Der8auer removed two of the SP-CAPs, put MLCC banks in instead, and got another 40MHz out of it. It was a high overclock though, and the six SP-CAPs would have been adequate if the card was left at stock. The card did overclock better once the change was made, though, so... The fact that they say "it is not true to assert that one capacitor type is better than the other" is laughable.

That statement is technically true though: tantalum caps are better for some situations while MLCCs are better for others. For example, capacitors lose some capacitance as the DC voltage across them increases; MLCCs are pretty bad in this respect, whereas tantalums aren't affected as much. This is why it might actually be better to have some MLCCs and some tantalums rather than all MLCC (as ASUS have done, purely for marketing, since plenty of 3080 buyers are now thinking "tantalums are evil and MLCC is king!").
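
To illustrate that DC-bias point with made-up but plausible derating numbers (these are assumptions for illustration, not measurements of any specific part):

```python
# Assumed derating under DC bias -- illustration only, not datasheet data.
mlcc_nominal_uf, mlcc_bias_loss = 47.0, 0.50         # class-2 MLCCs can lose a lot of C under bias
polymer_nominal_uf, polymer_bias_loss = 470.0, 0.02  # polymer/tantalum barely moves

print(f"47 uF MLCC under bias:     ~{mlcc_nominal_uf * (1 - mlcc_bias_loss):.0f} uF effective")
print(f"470 uF polymer under bias: ~{polymer_nominal_uf * (1 - polymer_bias_loss):.0f} uF effective")
```

So the nominal value printed on the BOM isn't necessarily what the rail actually sees under load, which is part of the argument for mixing cap types rather than going all-MLCC for marketing's sake.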
 
Look, folks: there is clearly a reason why most high-end cards use MLCCs for at least their central capacitors.

If der8auer got an extra 30-40MHz, good for him, but when these GPUs can go over 2000MHz that is a difference of less than 2%. You will not notice that difference in games.

Remember that Gigabyte's Gaming OC and Eagle OC models are not their premium releases and that non-premium models are designed to run at their listed specs, not for overclocking.

The main point here is that configurations like Gigabyte's Gaming OC and Eagle OC work.

I'm going to watch the der8auer video now, but I am curious whether his testing used Nvidia's latest drivers. If not, would that 30-40MHz gap still exist with them?
 
Yeah, at the end of the day, when you're picking out a capacitor you're looking at quite a wide range of properties, many of which vary with the environment or the input, in terms of both signal magnitude and frequency. If the "typically problematic" aspects of tantalum caps were still within Nvidia's stated specs with the right models, then you would start to consider their upsides a little more seriously, such as their "guaranteed" big jumps in available capacitance, in the sense that they have no comparable drop-off under higher temperatures or loads.

Of course you can get modern, somewhat economical X8x-rated MLCC caps that are more than stable enough to remain useful at any temperature or input. But tbf you could still lose 40% of the capacitance with a more common (but high-end for MLCC) X8R cap if you push it on temperature and load (though realistically you're never going to reach those conditions without the GPU frying first, you'd still expect a chunky loss with most models at, say, 110C). I think there's definitely an argument that a mix is potentially more stable across a wider range of scenarios, if these chips need a high-capacitance filter.
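
As a rough sketch of that "a mix is more stable" argument, here's a bank-level comparison with assumed worst-case derating factors (every number here is an illustrative assumption, not part data):

```python
def worst_case_uf(groups):
    """Total effective capacitance of parallel groups after an assumed derating factor."""
    return sum(count * nominal * (1 - loss) for count, nominal, loss in groups)

# (count, nominal uF, assumed worst-case fractional capacitance loss)
all_mlcc = [(6 * 10, 47, 0.40)]                 # six banks of ten MLCCs each
mixed    = [(5, 470, 0.05), (10, 47, 0.40)]     # five SP-CAPs plus one MLCC bank

print(f"All-MLCC worst case: {worst_case_uf(all_mlcc):.0f} uF")
print(f"Mixed worst case:    {worst_case_uf(mixed):.0f} uF")
```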
 
Just watched the der8auer video.

Based on what I can see, the crashes were caused by GPU Boost suddenly spiking the clock more than 150MHz above the average clock speed for that workload. That looks like a problem with Nvidia's GPU Boost tech, not the capacitors.

My thoughts now are that Nvidia's boost algorithm was a little naff with Ampere and that the "better" capacitor designs somewhat countered this problem. Now it makes total sense that drivers have fixed it: fix the clock spikes that made some cards unstable and you fix the problem.
 
Look, folks: there is clearly a reason why most high-end cards use MLCCs for at least their central capacitors.

If der8auer got an extra 30-40MHz, good for him, but when these GPUs can go over 2000MHz that is a difference of less than 2%. You will not notice that difference in games.

Remember that Gigabyte's Gaming OC and Eagle OC models are not their premium releases and that non-premium models are designed to run at their listed specs, not for overclocking.

The main point here is that configurations like Gigabyte's Gaming OC and Eagle OC work.

I'm going to watch the der8auer video now, but I am curious whether his testing used Nvidia's latest drivers. If not, would that 30-40MHz gap still exist with them?

Yeah, I hate to throw a spanner in the works for those talking about der8auer, but loads of people on OCUK noted better clocks after the driver update.

Some had clocks 30MHz lower, some had clocks 30MHz higher. Sound familiar?

The GPU is now getting more power, allowing better dies to boost higher, and it is obviously sniffing out the die and making poorer ones clock lower for stability.
 
Well, if it's sudden bursts in clock speed causing the issues, that would explain why MLCC caps handled things better: big, sudden load and frequency changes = sharper spikes = higher effective frequencies to be filtered, and the higher ESR of tantalum caps means they become much less effective (essentially bypassed) with very high-frequency bursts.
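
A quick way to see the "sharper spikes = higher frequencies" part is the usual rule of thumb that an edge with rise time t contains significant spectral content up to roughly 0.35/t. The rise times below are assumed examples, purely to show the scale:

```python
# Rule of thumb: knee frequency of a fast edge ~ 0.35 / rise time.
# Rise times below are assumptions, purely to show the scale.
for t_rise_ns in (100.0, 10.0, 1.0):
    knee_mhz = 0.35 / (t_rise_ns * 1e-9) / 1e6
    print(f"{t_rise_ns:6.1f} ns load step -> content up to ~{knee_mhz:6.1f} MHz")
```

The faster the clock and load jump, the higher the frequencies the filter has to deal with, which is exactly the region where higher ESR and ESL hurt.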
 
Yeah, I hate to throw a spanner in the works for those talking about der8auer, but loads of people on OCUK noted better clocks after the driver update.

Some had clocks 30MHz lower, some had clocks 30MHz higher. Sound familiar?

The GPU is now getting more power, allowing better dies to boost higher, and it is obviously sniffing out the die and making poorer ones clock lower for stability.

All the driver does is fix GPU Boost. Fewer large spikes, lower load on the capacitors and increased stability.

It's not the clocks that are the problem, it's the spikes. The spikes drained power from the capacitors and caused instability. Fixing GPU Boost in the drivers fixed the issue.

Clock spikes caused the capacitors to drain before the main power circuitry could compensate. This was an Nvidia problem, not an AIB problem.
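
Back-of-the-envelope version of that, with assumed numbers (the load step size, VRM response time, bulk capacitance and bank ESR are all guesses for illustration):

```python
def droop_mv(step_a, response_us, bulk_uf, bank_esr_mohm):
    """Rail droop during a sudden load step, before the VRM loop catches up:
    charge pulled from the caps (I*t/C) plus the resistive step (I*ESR)."""
    dv = step_a * (response_us * 1e-6) / (bulk_uf * 1e-6) + step_a * (bank_esr_mohm * 1e-3)
    return dv * 1e3

# Assumed: 2820 uF of bulk capacitance, ~2 mOhm effective bank ESR, 1 us until the VRM responds.
for step_a in (50, 100):
    print(f"{step_a:>4} A spike -> ~{droop_mv(step_a, 1.0, 2820, 2):.0f} mV droop")
```

Roughly double the spike and you roughly double the droop, so trimming the worst clock/load spikes in the driver directly reduces how hard the capacitor bank gets hit.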
 
Remember that Gigabyte's Gaming OC and Eagle OC models are not their premium releases and that non-premium models are designed to run at their listed specs, not for overclocking.

But isn't the whole issue that a decent number of cards are having problems without being overclocked? It's happening on some cards when they reach around 2GHz, which GPU Boost will hit on some cards at stock. I don't think anyone considers GPU Boost to be overclocking; it's what the card does by itself without any user intervention.
 