#91 | 20-09-20, 12:59 AM | Avet (OC3D Elite)
Quote (Originally Posted by Dawelio):
Wait, what? ... Have I been living under a rock or something, because this just sounded like a whole news bomb to me.

Why can't Ryzen cope with the horsepower, but Intel can? Especially given the massive popularity of Ryzen lately. It seems a bit odd and illogical if Ryzen can't cope with the 3080 while people are still buying it.

Rocket Lake, when will that be released exactly? And won't that require an entirely new upgrade like always, CPU and motherboard?
Intel beating AMD in gaming:
https://www.techpowerup.com/review/n...-intel-10900k/

Intel beating AMD in productivity. Yes, AMD beats Intel in rendering tasks, but you don't render with the CPU; you render with the GPU using CUDA and OptiX. Look at the General Actions results. It will be the same in all the other Autodesk apps, Blender, SolidWorks...
https://www.pugetsystems.com/labs/ar...ActionsResults
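To give a rough idea of what "render with the GPU using CUDA and OptiX" looks like in practice, here is a minimal sketch using Blender's Python API (bpy) to switch Cycles onto the GPU. The property names assume a recent 2.8x/2.9x build of Blender, so treat it as illustrative rather than definitive:

[CODE]
import bpy

# Point Cycles at the GPU backend. "OPTIX" needs an RTX card; on older
# GeForce GPUs you would set "CUDA" instead.
cycles_prefs = bpy.context.preferences.addons["cycles"].preferences
cycles_prefs.compute_device_type = "OPTIX"
cycles_prefs.get_devices()  # refresh the detected device list

# Enable every detected non-CPU device for rendering.
for device in cycles_prefs.devices:
    device.use = device.type != "CPU"

# Render the scene on the GPU rather than the CPU.
bpy.context.scene.cycles.device = "GPU"
[/CODE]

With that set, the CPU choice matters far less for final render times, which is the point being made about CPU rendering benchmarks.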

Clear bottleneck by AMD CPUs:
https://youtu.be/0DKVVtirNM8

More myths busted:
https://youtu.be/1kK6CBJdmug

It is just AMD hype: "PCIe Gen 4", slightly better pricing, and misrepresented rendering benchmarks that make AMD CPUs look better in apps like Blender, where Intel actually beats AMD in viewport performance and general tasks.

Intel's mainstream CPUs are better for gaming and all but a few production tasks (Adobe Premiere).

Threadripper is king for heavy-duty stuff, but mainstream Ryzen on X570 is behind Intel in pretty much everything.

#92 | 20-09-20, 01:10 AM | AlienALX (OC3D Elite)
Quote (Originally Posted by Dawelio):
Well, that was a mouthful... What does everyone else in here think?

Do you all agree with Alien on his points? I'm not talking about his beliefs as such, but the raw facts and points of his post, regarding Ampere and its 1440p as well as 4K performance.
They are not points. They are facts. Seriously, if you want to learn you need to educate yourself on how GPU technology works, dude.



[Embedded video] is a good way to start, and a way to understand why Ampere is what it is.

As for Intel being "better" than AMD? At 1080p, and now 1440p, they are, but for Ampere and Ampere only, and in gaming only, as they still get their asses whooped at everything else.

And just because Intel are better at those resolutions with Ampere, it does not mean they are the best. It just means that for gaming they bottleneck the 3080 less.

What we need now is for Intel and AMD (I predict the latter, *cough* October) to step up and start making CPUs that match.

Until then, unless you are a 4K die-hard, avoid Ampere for now at least. Even the 20GB models will still be nowhere near as good as Nvidia back on TSMC with a shrink. I.e. take the decent YouTubers' advice: if you have a 2080 Ti or Super, skip this round, wait for AMD and then make a better, more informed decision.

If it were 6 months or more, I would tell you to grab one. But one month (and let's face it, it will be longer than that before you can actually get an Ampere anyway) makes it just daft not to wait.

I said before how it drives me nuts when people buy GPUs not suited to the task they are buying them for. If you use 4K? 3080. However, it comes with caveats, like a very questionable amount of VRAM for next-gen titles. Remember, next-gen titles. They'll be nothing like you have seen yet.
#93 | 20-09-20, 01:55 AM | KingNosser (Advanced Member)
Well, to anyone thinking about the 3090, I'd wait for reviews. I don't know if the leaked benchmarks coming out are right or not, but 10% doesn't seem worth it for £1,500 unless you really need that VRAM.

As for Intel, they are going to be in for a shock on the 8th.
#94 | 20-09-20, 04:46 AM | Kaapstad (OC3D Elite)
Quote (Originally Posted by KingNosser):
Well, to anyone thinking about the 3090, I'd wait for reviews. I don't know if the leaked benchmarks coming out are right or not, but 10% doesn't seem worth it for £1,500 unless you really need that VRAM.

As for Intel, they are going to be in for a shock on the 8th.
I did look at that 10% claim and the figures look dodgy.

I think the 3090 will come in about 20% faster than a 3080 at 2160p; lower resolutions will be CPU-bottlenecked.

The question is whether 20% faster is enough to justify the very high price tag.
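As a quick back-of-the-envelope on that question (prices are assumptions for the sketch: roughly £650 for a 3080 and the ~£1,500 3090 figure mentioned above), 20% more performance works out to roughly half the performance per pound:

[CODE]
# Back-of-the-envelope value check with assumed prices and an assumed
# ~20% performance gap at 2160p.
cards = {
    "3080": {"price_gbp": 650,  "relative_perf": 1.00},
    "3090": {"price_gbp": 1500, "relative_perf": 1.20},
}

for name, card in cards.items():
    value = card["relative_perf"] / card["price_gbp"] * 1000
    print(f"{name}: {value:.2f} performance units per £1,000")

# 3080: 1.54 performance units per £1,000
# 3090: 0.80 performance units per £1,000  (roughly half the value)
[/CODE]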
#95 | 20-09-20, 05:20 AM | NeverBackDown (AMD Enthusiast)
Nvidia themselves said and marketed the 3080 as their flagship card. The 3090 is aimed more at prosumer use and above, where its memory is useful.

So for gaming alone it's not worth it.
#96 | 20-09-20, 10:59 AM | trawetSluaP (OC3D Crew)
Quote (Originally Posted by Kaapstad):
I did look at that 10% claim and the figures look dodgy.

I think the 3090 will come in about 20% faster than a 3080 at 2160p; lower resolutions will be CPU-bottlenecked.

The question is whether 20% faster is enough to justify the very high price tag.
You also have to take the 24GB of memory into account, plus the much larger cooler.

On the other points raised, I kind of feel people need to stop giving NV a hard time about this launch and blame the scalpers and the people using bots (one person apparently got 42 cards using a bot program).

Also, 10GB is more than enough VRAM. Unfortunately a lot of people will look at the readings shown by GPU-Z or similar programs, see close to 11GB of usage (on a 2080 Ti) and assume that is memory actually being used, without understanding the difference between memory that is in use and memory that has merely been allocated/requested. You also have to take RTX IO into account, which should help too.
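To make the allocated-versus-used distinction concrete, here is a minimal sketch that reads the same driver counter those monitoring tools display, using the pynvml bindings (assuming the nvidia-ml-py package is installed). The "used" figure is memory the driver has allocated, which says nothing about how much of it a game actually touches each frame:

[CODE]
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

# "used" is memory the driver has allocated - the same number GPU-Z style
# tools show - not the amount a game is actively working with.
print(f"total:            {mem.total / 2**30:5.1f} GiB")
print(f"used (allocated): {mem.used / 2**30:5.1f} GiB")
print(f"free:             {mem.free / 2**30:5.1f} GiB")

pynvml.nvmlShutdown()
[/CODE]

Engines routinely request more VRAM than they strictly need when it is available, which is why a 2080 Ti can "show" nearly 11GB without a 10GB card actually falling over.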
#97 | 20-09-20, 11:08 AM | TheF34RChannel (OC3D Elite)
@AlienALX where can I read up on how good/bad Samsung 8nm is, especially compared to TSMC?

Interesting points of view and references to proven facts, guys. I'd like to weigh in with my own situation (specs in brief below): what would you do in my case for pure gaming, without changing anything else (Intel + mobo are way overpriced right now and I don't really need either)? What GPU generation / model / VRAM configuration? Keep in mind I plan on keeping the GPU for a few years.

Specs in brief:

EVGA 850W P2 PSU
Asus M10H board
8700K @ 4.9GHz
16GB 3333MHz
1440p G-Sync IPS display
Current GPU: Asus 1080 Strix

Edit: I only buy new, no second hand. And new features like RTX IO and DLSS 2.0 may also factor in, because you're buying into those features with the upcoming products.

Everyone: go!
#98 | 20-09-20, 11:27 AM | trawetSluaP (OC3D Crew)
Quote (Originally Posted by TheF34RChannel):
Interesting points of view and references to proven facts, guys. I'd like to weigh in with my own situation (specs in brief below): what would you do in my case for pure gaming, without changing anything else (Intel + mobo are way overpriced right now and I don't really need either)? What GPU generation / model / VRAM configuration? Keep in mind I plan on keeping the GPU for a few years.

Specs in brief:

EVGA 850W P2 PSU
Asus M10H board
8700K @ 4.9GHz
16GB 3333MHz
1440p G-Sync IPS display
Current GPU: Asus 1080 Strix

Everyone: go!
Now is a great time to pick up a cheap second-hand 2080 Ti if you can afford one, especially if you plan to stick at 1440p.
#99 | 20-09-20, 01:05 PM | AlienALX (OC3D Elite)
Quote (Originally Posted by TheF34RChannel):
@AlienALX where can I read up on how good/bad Samsung 8nm is, especially compared to TSMC?


[Embedded video] is the best place to start, because he breaks it down into something pretty much anyone who can listen will understand.

What I mean by that is you do need to listen and be able to take it in. Some people just can't seem to do that.

In it he talks about the nodes and how they compare to TSMC: how many transistors Samsung 8nm can hold, how many TSMC 7nm holds, wafer costs (the smaller the node, the more the wafers cost) and so on.

Some of it is negative toward Nvidia, like how they tried to squeeze TSMC into giving them lower prices by using Samsung as a bargaining chip, but failed because TSMC have more than enough work to tell them to get bent.

However, no matter how negative he can get sometimes, he speaks the truth. Not always in his predictions, as he is sometimes fed false info, but on the tech itself he really knows what he is talking about.
#100 | 20-09-20, 01:30 PM | AlienALX (OC3D Elite)
Quote (Originally Posted by Dawelio):
Why can't Ryzen cope with the horsepower, but Intel can? Especially given the massive popularity of Ryzen lately.
Let me explain that in a totally non-Intel-shill-fanboy way.

Firstly, Intel cannot cope with Ampere either. I would bet my hat it is still bottlenecking Ampere at lower resolutions; it just doesn't bottleneck it as badly as AMD does.

The reason for that? Clock speed, it's as simple as that. Gaming likes very high clock speeds, which AMD have not been able to deliver yet. Intel have, but only by going through about two million 14nm refreshes. On the last actual die shrink they managed (Broadwell-E), clock speed fell off a cliff. Their Broadwell desktop CPUs were so bad they didn't even really launch them; it was just some kind of paper launch with about 10 available for sale.

Ryzen is popular because, for what it delivers and the technology you get, it is far cheaper than Intel. Remember, price is king to 99% of gamers. The differences being pointed out in Intel's favour are not worth the outlay: the boards are more expensive, the CPUs run hotter so need aftermarket cooling, and so on.

However, to explain why this bottlenecking is happening: Ampere is a tank.

For many years Nvidia made very big and powerful GPUs, ones like the GTX 280 and then the GTX 480. But during those years CUDA cores and sheer heft were pretty much useless for gaming; DX11 didn't care. So the only way to truly get the most out of a GTX 480, for example (over the Radeon 5870), was to crank up the FSAA, because otherwise the CPU would not be able to keep up with the sheer heft of Fermi's design.

Fermi was useless for gaming. It was about 3% faster than a 5870 at the same settings, which increased to about 8% when you cranked up the FSAA, because that gave the card more work to do. Ampere is the same: if you don't feed that ridiculous number of CUDA cores, they sit doing nothing. Hence 4K is the only way to really leverage the power of Ampere.
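A toy model of that bottleneck argument, with made-up illustrative numbers rather than anything measured: the delivered frame rate is set by whichever of the CPU or GPU takes longer per frame, so a GPU with far more shader hardware only pulls ahead once the per-frame GPU workload (resolution, AA) grows large enough that the CPU stops being the limit.

[CODE]
# Toy CPU/GPU bottleneck model with illustrative, made-up numbers.
CPU_FRAME_MS = 6.0  # CPU time per frame (game logic, draw calls) - resolution independent

# Hypothetical GPU time per frame (ms) at each resolution.
gpu_frame_ms = {
    "big_gpu":   {"1080p": 3.0, "1440p": 5.0, "2160p": 11.0},   # lots of CUDA cores
    "small_gpu": {"1080p": 4.5, "1440p": 7.5, "2160p": 16.5},
}

for gpu, times in gpu_frame_ms.items():
    for res, gpu_ms in times.items():
        frame_ms = max(CPU_FRAME_MS, gpu_ms)   # the slower of the two sets the pace
        limiter = "CPU-bound" if CPU_FRAME_MS >= gpu_ms else "GPU-bound"
        print(f"{gpu} @ {res}: {1000 / frame_ms:5.1f} fps ({limiter})")
[/CODE]

In this sketch both hypothetical cards deliver the same CPU-bound frame rate at 1080p; only at 2160p does the bigger GPU's extra hardware actually show up.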

After Turing (12nm TSMC) pretty much everything should have improved: more CUDA cores, lower power consumption, lower temps (resulting in higher clocks) and so on. That is usually the payoff for a decent die shrink on a good node.

Ampere achieved one of those: more CUDA cores in a smaller space. The power consumption is awful, the clock speeds are pretty much identical to Turing (and often worse when pushed) and the heat is crazy. And by heat I mean total heat output, not core temperature. Just because the core temps are in the 70s doesn't mean the heat is fine: the cooler has to get rid of all that waste heat (which in its defence it does quite well), but the card is still drawing that whopping amount of power.

Technically, in engineering terms, Ampere is a complete failure. It will still have some use to gamers, but where it really shines is when you can actually summon the power of those CUDA cores (rendering, where it demolishes Turing). The problem is that in gaming it's very hard to give them all work.

As such, a good 2080 Ti will out-clock a 3080 whilst using around the same amount of power, and at lower resolutions it will be faster, as more of its GPU is able to be utilised.

Hopefully the 3070 will clock well. If it does, a 16GB version could be ace.

However, that leads me to the last thing you should know about Ampere: it overclocks like total crap. And again, that is down to the poor node. Nvidia have already clocked its balls off, leaving pretty much nothing of note in the tank. If it were on a good node it would overclock like crazy on top of the stock clocks.

Just like Pascal. None of this nonsense about "scan tools are coming" etc. From day one it was a frickin' speed demon, because it was a great core design on a fantastic node.

Turing can clock the same under water, but then you need to consider everything else going on in the Turing design. It has tensor cores, RT cores, etc., so getting all of that to run at those higher clocks is much harder.

However, Pascal was useless for RT and Nvidia knew it. They even allowed you to run RT on it just to see how bad it was.