Nvidia RTX 3080 Founders Edition Review

Will there be more FE Cards?

I can't tell. Nvidia's shop just says "sold out" and nothing more.
 
Not sure about FE models, but on new 3080 stock in general, Gibbo (OCUK) said this (square brackets added for context):
(Bear in mind most people couldn't actually get on the site until after 15:30)

This is a guess but I think anyone [who ordered] after 14:05 won't see a card in September, those before 15:00 maybe October but until we get shipment data for October we are in the dark, that information is requested but it seems it won't be available to the graphics card manufacturers until end of this month

I think he also said they had about 50 of each card for launch day, IIRC?
 
But it won't be $700. The 3090's die is not THAT much bigger than the 3080's, yet it's twice the price; I imagine a large portion of that cost goes on the vast amount of expensive VRAM. The 20GB 3080 is likely going to be $800 or even $850. You won't see the performance benefit of that extra cash in many instances, and a lot of people won't deem it necessary, especially if they're at 1440p where 10GB is more than enough.

^ This. This is what I don't get: why are people getting so fussed about the 10GB of VRAM? It's plenty for most people today.
 

Today, sure. But I plan on using this card for years (4-5) at 1440p, so 20GB will benefit me later on. The 10GB is like the Fury with its 4GB: plenty of people said that was enough at the time. It wasn't. ;)
 
^ This. This is what I don't get: why are people getting so fussed about the 10GB of VRAM? It's plenty for most people today.

No it isn't.

You need to understand the entire context of Ampere. You absolutely, 100% DO NOT need it for 1440p. In fact, in many instances the 2080 Ti is as fast, and if you are lucky enough to have yours under a water block it is faster. And yes, the 2080 Ti used to cost £1,300. It doesn't now; they can easily be had for £500, even after the launch.

Ampere suffers at 1440p. Not because it is bad at that resolution, but because, just like at 1080p, there isn't a CPU on earth that can keep its design fully occupied. What I mean is: because Nvidia doubled down on CUDA cores (Samsung 8nm being as poor as it is), you need to give it a very heavy workload. Very heavy indeed. That is why they were boasting about 8K with the 3090, dude. I would strongly imagine it is so powerful, with that honking number of CUDA cores, that it will be CPU-bottlenecked even at 4K.
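
To put the bottleneck argument into a toy model (a minimal sketch with made-up numbers, not benchmarks): the frame rate you actually see is set by whichever of the CPU or GPU is slower per frame, so extra GPU headroom only shows up once the GPU is the limiting side.

```python
# Toy CPU/GPU bottleneck model. All figures are illustrative, not measured benchmarks.
def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    """The frame rate is limited by whichever side takes longer per frame."""
    return min(cpu_fps, gpu_fps)

# Hypothetical numbers: the CPU can prepare ~150 frames/s regardless of resolution,
# while the GPU's throughput depends on resolution.
cpu_fps = 150
scenarios = {"1440p, last-gen GPU": 140, "1440p, Ampere": 220, "4K, Ampere": 110}
for name, gpu_fps in scenarios.items():
    print(f"{name:>19}: GPU capable of {gpu_fps} fps -> you see {effective_fps(cpu_fps, gpu_fps)} fps")
# At 1440p the jump from 140 to 220 fps of GPU capability only shows up as 140 -> 150,
# whereas at 4K the GPU is the limiter, so its extra power is actually visible.
```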

However, this doesn't mean it is a good buy. If it is so badly bottlenecked at 1440p, it is a waste of money at that resolution. A 2080 Super or above is mostly good enough there, and considerably cheaper.

Ampere is literally a "4K or you are wasting your money" GPU so far, and 10GB of VRAM *is not enough*. Why do you think they are preparing cards with more? Think about it. THEY know it is not enough. That's how they got the price down. Ampere is not cheap; as much as they made it sound cheap, it isn't. There are so many catches it's not even funny. Like how, because 8nm is so awful, they have clocked the card to within an inch of its life and it is literally at the limit. And no, no OC scanner is coming, and no, there is nothing you can do about it. Der8auer already shunt-modded one to give it a huge amount of extra power and it made no difference.

And they have done all of this because they know that otherwise Big Navi would get them in trouble. It's on a better node (not just by the number, but by the whole design: Samsung 8nm packs in about what TSMC 10nm could) and it will be a monster.

Now, when Nvidia return to TSMC after their failed plan to get cheaper wafers, and go to 5nm? AMD have had it. They could destroy AMD once and for all with that. Right now, though, they have gone first and their node is poor (again, you need to understand just how bad it is compared to what it would have been on TSMC), and so they have cut back the VRAM. Otherwise, between Big Navi and their next architecture (at least two years away), they would have nothing to answer with.

So you can expect "Super" cards, "Ti" cards, cards that double the VRAM, and so on.
 

Well, that was a mouthful... What does everyone else in here think?

Do you all agree with Alien on his points? I'm not talking about his beliefs as such, but the raw facts and claims in his post regarding Ampere at 1440p as well as 4K.
 
The 3080 is a good purchase for 1440p if you pair it with a 10900K, 10850K, or 10700K. If you go with AMD it is a waste of money; AMD CPUs just can't cope with the horsepower. Maybe (more in hope than expectation) Zen 3 will come close. PCIe Gen 4 is still not worth the money, and RTX IO is still a year away. Intel Rocket Lake should be out just after the New Year; that will be a good purchase for gaming and most productivity tasks, and it will make the 3080 a long-lasting investment.
 

Wait, what? :huh:... Have I been living under a rock, or what? This sounds like a whole new bombshell to me.

Why can't Ryzen cope with the horsepower while Intel can? Especially given Ryzen's massive popularity lately. It seems a bit odd and illogical if Ryzen can't cope with the 3080 and people are still buying it.

Rocket Lake, when will that be released exactly? And won't that require an entirely new platform as always, CPU and motherboard?
 
Intel beating AMD in gaming:
https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-amd-3900-xt-vs-intel-10900k/

Intel beating AMD in productivity: yes, AMD beats Intel in rendering tasks, but you don't render with the CPU, you render with the GPU using CUDA and OptiX. Look at the General Actions results; it will be the same in all the other Autodesk apps, Blender, SolidWorks...
https://www.pugetsystems.com/labs/a...ndup-Intel-vs-AMD-1812/#GeneralActionsResults

Clear bottleneck by AMD CPUs:
https://youtu.be/0DKVVtirNM8

More myths busted:
https://youtu.be/1kK6CBJdmug

It is just AMD hype, "PCIe Gen 4", slightly better pricing, and misrepresented rendering benchmarks that make AMD CPUs look better in apps like Blender, where Intel actually beats AMD in viewport performance and general tasks.

Intel's mainstream CPUs are better for gaming and all but a few production tasks (Adobe Premiere being one).

Threadripper is king for heavy-duty stuff, but AMD's mainstream X570 platform is behind Intel in pretty much everything.
 
Well, that was a mouthful... What does everyone else in here think?

Do you all agree with Alien on his points? I'm not talking about his beliefs as such, but the raw facts and claims in his post regarding Ampere at 1440p as well as 4K.

They are not just points; they are facts. Seriously, if you want to learn, you need to educate yourself on how GPU technology works, dude.

https://www.youtube.com/watch?v=nJBggUfYozY&ab_channel=AdoredTV

That is a good way to start, and it will help you understand why Ampere is what it is.

As for Intel being "better" than AMD: at 1080p and now 1440p they are, for Ampere and Ampere only, and in gaming only, as they still get beaten in pretty much everything else.

And just because Intel are better at those resolutions with Ampere, it does not mean they are the best overall. It just means that for gaming they bottleneck the 3080 less.

What we need now is for Intel and AMD (I predict the latter, *cough* October) to step up and start making CPUs to match.

Until then, unless you are a 4K die-hard, avoid Ampere for now at least. Even the 20GB models will still be nowhere near as good as Nvidia back on TSMC with a shrink. In other words, take the advice of a lot of the decent YouTubers: if you have a 2080 Ti or a Super, skip this round, wait for AMD, and then make a better-informed decision.

If it were six months or more, I would tell you to grab one. But with one month to go (and let's face it, it will be longer than that before you can actually get an Ampere anyway), it is just daft not to wait.

I have said before how it drives me nuts when people buy GPUs not suited to the task they are buying them for. If you game at 4K? Get a 3080. However, it comes with caveats, like a very questionable amount of VRAM for next-gen titles. Remember, next-gen titles will be nothing like anything you have seen yet.
 
Well, to anyone thinking about the 3090: I'd wait for reviews. I don't know if the leaked benchmarks coming out are right or not, but 10% doesn't seem worth it for £1,500 unless you really need that VRAM.

As for Intel, they are going to be in for a shock on the 8th.
 

I did look at that 10% claim and the figures look dodgy.

I think the 3090 will come in about 20% faster than a 3080 at 2160p; lower resolutions will be CPU-bottlenecked.

The question is whether 20% faster is enough to justify the very high price tag.
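
As a back-of-the-envelope check, here is the arithmetic as a small sketch, using the £1,500 figure mentioned above, the 3080 FE's £649 list price, and a hypothetical 20% uplift:

```python
# Back-of-the-envelope price/performance comparison of the 3090 vs the 3080.
# Assumptions: £649 3080 FE list price, the ~£1,500 3090 figure from this thread,
# and a hypothetical 20% performance uplift at 2160p.
price_3080, price_3090 = 649, 1500
perf_3080, perf_3090 = 1.00, 1.20

price_ratio = price_3090 / price_3080
perf_ratio = perf_3090 / perf_3080
print(f"price ratio: {price_ratio:.2f}x, performance ratio: {perf_ratio:.2f}x")
print(f"performance per pound vs the 3080: {perf_ratio / price_ratio:.2f}x")
# Roughly 2.3x the price for 1.2x the frames, i.e. about half the value per pound,
# before you factor in the 24GB of VRAM.
```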
 
Nvidia did say, and market it as such, that the 3080 is their flagship gaming card. The 3090 is more for prosumer use and above, where its memory is useful.

So for gaming alone it's not worth it.
 
I did look at that 10% claim and the figures look dodgy.

I think the 3090 will come in about 20% faster than a 3080 at 2160p; lower resolutions will be CPU-bottlenecked.

The question is whether 20% faster is enough to justify the very high price tag.

You also have to take the 24GB of memory into account, plus the much larger cooler.

On the other points raised, I kind of feel people need to stop giving Nvidia a hard time about this launch and blame the scalpers and people using bots (one person apparently got 42 cards using a bot program).

Also, 10GB is more than enough VRAM. Unfortunately, a lot of people will look at the readings shown by GPU-Z or similar programs, see close to 11GB of usage (on a 2080 Ti) and assume this is memory actually being used, without understanding the difference between memory in use and memory that has merely been allocated/requested. You also have to take RTX IO into account, which should help as well.
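
If anyone wants to see the allocated-versus-used distinction for themselves, here is a rough sketch using PyTorch's caching allocator as an analogy (it assumes you have a CUDA build of PyTorch and an Nvidia GPU); game engines over-allocate VRAM in much the same way:

```python
# Rough illustration of "allocated/reserved" vs "actually in use" VRAM, using PyTorch's
# caching allocator as an analogy for how game engines request more memory than they touch.
import torch

if torch.cuda.is_available():
    x = torch.empty(1024, 1024, 256, device="cuda")    # ~1 GiB of float32 tensors
    del x                                               # the tensors are freed...
    allocated = torch.cuda.memory_allocated() / 2**20   # ...so this drops back towards zero
    reserved = torch.cuda.memory_reserved() / 2**20     # ...but the allocator keeps its pool
    print(f"allocated by live tensors: {allocated:.0f} MiB")
    print(f"reserved from the GPU:     {reserved:.0f} MiB")
    # Monitoring tools report something closer to the reserved figure, not what is
    # genuinely needed from frame to frame.
else:
    print("No CUDA device available; this sketch needs an Nvidia GPU.")
```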
 
@AlienALX where can I read up on how good/bad Samsung 8nm is, especially compared to TSMC?

Interesting points of view and references, guys. I'd like to ask about my own situation (see system specs in my sig): what would you do in my case for pure gaming, without changing anything else (a new Intel CPU + motherboard are way overpriced right now and I don't really need either)? Which GPU generation / model / VRAM configuration? Keep in mind I plan on keeping the GPU for a few years.

Specs in brief:

EVGA 850W P2 PSU
Asus M10H board
8700K @ 4.9GHz
16GB 3333MHz
1440p G-Sync IPS display
Current GPU: Asus 1080 Strix

Edit: I only buy new, no second-hand. And new features like RTX IO and DLSS 2.0 may also factor in, because you're also buying into the new feature set with the new products.

Everyone: go! :D
 

Now is a great time to pick up a cheap second-hand 2080 Ti if you can afford one, especially if you plan to stick at 1440p.
 
@AlienALX where can I read up on how good/bad Samsung 8nm is, especially compared to TSMC?

https://www.youtube.com/watch?v=tXb-8feWoOE&ab_channel=AdoredTV

That is the best place to start, because he breaks it down into something pretty much anyone who is willing to listen will understand.

What I mean by that is you do need to listen and be able to take it in. Some people just can't seem to do that.

In that video he talks about the nodes and how they compare to TSMC: how many transistors Samsung 8nm can pack in, how many TSMC 7nm can, wafer costs (the smaller the node, the more the wafers cost) and so on.
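
For a rough sense of scale, here is a small sketch comparing Nvidia's own two Ampere dies using public figures (approximate, and a crude comparison since the dies have very different contents):

```python
# Rough transistor-density comparison of Nvidia's two Ampere dies, from public figures
# (approximate, and density also depends on what each die contains):
#   GA100 (TSMC 7nm, A100):          ~54.2 billion transistors on ~826 mm^2
#   GA102 (Samsung 8nm, 3080/3090):  ~28.3 billion transistors on ~628 mm^2
dies = {
    "GA100 (TSMC 7nm)":    (54.2e9, 826.0),
    "GA102 (Samsung 8nm)": (28.3e9, 628.4),
}
for name, (transistors, area_mm2) in dies.items():
    print(f"{name:>20}: {transistors / 1e6 / area_mm2:.1f} MTr/mm^2")
# Roughly 66 vs 45 million transistors per mm^2 -- crude, but it gives a feel for why
# the Samsung node gets described as closer to a 10nm-class process.
```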

Some of it is negative toward Nvidia, like how they tried to pressure TSMC into lower prices by using Samsung as leverage, but failed because TSMC have more than enough work to tell them to get bent.

However, no matter how negative he can get at times, he speaks the truth. Not always in his predictions, as he is sometimes fed false info, but on the tech side he really knows what he is talking about.
 
Why can't Ryzen cope with the horsepower while Intel can? Especially given Ryzen's massive popularity lately.

Let me explain that in a totally non-Intel-fanboy way.

Firstly, Intel cannot fully cope with Ampere either. I would bet my hat it is still bottlenecking Ampere at lower resolutions; it just doesn't bottleneck it as badly as AMD does.

The reason for that? Clock speed. It's as simple as that. Gaming likes very high clock speeds, which AMD have not been able to deliver yet. Intel have, but only by going through about two million 14nm refreshes. On the last die shrink they actually managed (Broadwell), clock speed fell off a cliff. Their Broadwell desktop CPUs were so bad they didn't even really launch them; it was basically a paper launch with about 10 available for sale.

Ryzen is popular because, for what it delivers and the technology you get, it is far cheaper than Intel. Remember, price is king to 99% of gamers. The differences being pointed out in Intel's favour are not worth the outlay: the boards are more expensive, the CPUs run hotter so they need aftermarket cooling, and so on.

However, to explain why this bottlenecking is happening: Ampere is a tank.

For many years Nvidia made very big and powerful GPUs, ones like the GTX 280 and then the GTX 480. But during those years, CUDA cores and sheer heft were pretty much useless for gaming; DX11 didn't care. So the only way to truly get the most out of a GTX 480, for example (over the Radeon 5870), was to crank up the FSAA, because otherwise the CPU could not keep up with the sheer heft of Fermi's design.

Fermi was useless for gaming. It was about 3% faster than a 5870 at the same settings, which increased to about 8% when you cranked up the FSAA, because that gave the card more work to do. Ampere is the same: if you don't feed that ridiculous number of CUDA cores, they sit doing nothing. Hence why 4K is the only way to really leverage the power of Ampere.

After Turing (TSMC 12nm), pretty much everything should have improved: more CUDA cores, lower power consumption, lower temperatures (resulting in higher clocks) and so on. That is usually the payoff for a decent die shrink on a good node.

Ampere achieved one of those: more CUDA cores in a smaller space. The power consumption is awful, the clock speeds are pretty much identical to Turing (and often worse when pushed) and the heat output is crazy. And by heat you need to understand heat output, not temperature. Just because the core temps are in the 70s, that doesn't mean the thermals are good: the cooler has to get rid of all that waste heat (which, in its defence, it does quite well), but the card is still drawing a whopping amount of power.
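
To make the temperature-versus-heat point concrete, here is a toy thermal sketch with illustrative numbers (the thermal resistances are assumptions, not measured values): a better cooler lowers the reading, but the waste heat dumped into your case stays the same.

```python
# Toy thermal model: die temperature ~= ambient + power * thermal resistance of the cooler.
# Illustrative numbers only; the point is that a beefier cooler lowers the temperature
# reading, while the ~320 W of waste heat still ends up in your case and room.
def die_temp(power_w: float, r_th_c_per_w: float, ambient_c: float = 25.0) -> float:
    return ambient_c + power_w * r_th_c_per_w

power = 320.0  # roughly the 3080's rated board power
for cooler, r_th in [("modest cooler", 0.20), ("big FE-style cooler", 0.15), ("water block", 0.10)]:
    print(f"{cooler:>20}: ~{die_temp(power, r_th):.0f} C at {power:.0f} W")
```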

Technically, in engineering terms, Ampere is a complete failure. It will still have some use to gamers, but where it really shines is when you can actually summon the power of all those CUDA cores (rendering; it demolishes Turing at that). The problem is that in gaming it's very hard to give them all work.

As such, a good 2080 Ti will out-clock a 3080 whilst using around the same amount of power, and at lower resolutions it will be faster, as more of the GPU can actually be utilised.

Hopefully the 3070 will clock well. If it does, a 16GB version could be ace.

However, that leads me to the last thing you should know about Ampere: it overclocks like total crap. And again, that is down to the poor node. Nvidia have already pushed the clocks as far as they will go, leaving pretty much nothing of note in the tank. If it were on a good node it would overclock well beyond the stock clocks.

Just like Pascal. None of this nonsense about "scan tools are coming" and so on; from day one it was a speed demon, because it was a great core design on a fantastic node.

Turing can clock the same under water, but then you need to consider everything else going on in the Turing design: it has Tensor cores, RT cores and so on, so getting all of that to run at those higher clocks is much harder.

However, Pascal was useless for RT and Nvidia knew it; they even allowed you to run RT on it just so you could see how bad it was.
 