Why can't Ryzen cope with the horsepower, but Intel can? Especially given the massive popularity of Ryzen lately.
Let me explain that in a totally non-Intel-shill, non-fanboy way.
Firstly, Intel cannot fully cope with Ampere either. I would bet my hat it is still bottlenecking Ampere at lower resolutions; it just doesn't bottleneck as badly as AMD does.
The reason for that? Clock speed. It's as simple as that. Gaming likes very high clock speeds, which AMD have not been able to deliver yet. Intel have, but only by going through about two million 14nm refreshes. On the last actual die shrink they managed (Broadwell), clock speed fell off a cliff. Their Broadwell desktop CPUs were so bad they never really launched them properly; it was basically a paper launch with about 10 units available for sale.
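To put some made-up numbers on that bottleneck idea (purely illustrative, not benchmarks), here is a rough sketch of why CPU clock speed matters so much more at low resolutions:

```python
# Toy model (invented numbers, not benchmarks): the frame rate you see is
# capped by whichever side finishes its part of a frame last -- the CPU
# preparing draw calls, or the GPU rendering the pixels.

def effective_fps(cpu_fps_cap, gpu_fps_at_res):
    # The slower of the two sets the pace.
    return min(cpu_fps_cap, gpu_fps_at_res)

# Hypothetical CPU caps: a higher-clocked chip can prepare more frames per second.
higher_clocked_cpu = 180   # think a ~5 GHz part
lower_clocked_cpu = 150    # think a lower-clocked part

# Hypothetical GPU potential for a huge card like a 3080:
gpu_potential = {"1080p": 250, "1440p": 170, "4K": 90}

for res, gpu_fps in gpu_potential.items():
    print(res,
          "| high-clock CPU:", effective_fps(higher_clocked_cpu, gpu_fps),
          "| low-clock CPU:", effective_fps(lower_clocked_cpu, gpu_fps))

# At 1080p both CPUs hold the card back (250 possible, you get 150-180),
# and the higher-clocked one holds it back less. At 4K the GPU is the
# limit, so CPU clock speed barely matters.
```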
Ryzen is popular because, for what it delivers and the technology behind it, it is far cheaper than Intel. Remember, price is king to 99% of gamers. The differences being pointed out in Intel's favour are not worth the outlay: the boards are more expensive, the CPUs run hotter so need aftermarket cooling, and so on.
However, to explain why this bottlenecking is happening: Ampere is a tank.
For many years Nvidia made very big and powerful GPUs, ones like the GTX 280 and then the GTX 480. But during those years, CUDA cores and sheer heft were pretty much useless for gaming; DX11 didn't care. So the only way to truly get the most out of a GTX 480, for example, over the Radeon 5870 was to crank up the FSAA, because otherwise the CPU would not be able to keep up with the sheer heft of Fermi's design.
Fermi was largely wasted on gaming. It was about 3% faster than a 5870 at the same settings, which increased to about 8% when you cranked up the FSAA, because that gave the card more work to do. Ampere is the same: if you don't feed that ridiculous number of CUDA cores, they sit doing nothing. Hence why 4K is the only way to really leverage the power of Ampere.
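Same idea seen from the GPU's side, as a rough sketch (again, invented numbers): when the CPU caps the frame rate, the share of the GPU actually doing work is roughly that cap divided by what the card could manage on its own.

```python
# Rough sketch (invented numbers): if the CPU can only hand the GPU
# `cpu_fps` frames a second, the rest of the card's potential goes unused.

def gpu_utilisation(cpu_fps, gpu_potential_fps):
    # Fraction of the GPU's potential that the CPU-limited frame rate uses.
    return min(1.0, cpu_fps / gpu_potential_fps)

cpu_fps = 150                                             # what the CPU can feed
gpu_potential = {"1080p": 300, "1440p": 200, "4K": 100}   # an Ampere-sized card

for res, potential in gpu_potential.items():
    print(f"{res}: GPU roughly {gpu_utilisation(cpu_fps, potential):.0%} busy")

# 1080p: about half the card sits idle. 4K: the card is the limit and every
# CUDA core has work. Cranking FSAA at a lower resolution does the same job --
# it raises the per-frame cost, dragging the 'potential fps' down toward the
# CPU cap, which is exactly why Fermi's lead grew with FSAA turned up.
```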
After Turing (12nm TSMC), pretty much everything should have improved: more CUDA cores, lower power consumption, lower temps (resulting in higher clocks) and so on. That is usually the payoff for a decent die shrink on a good node.
Ampere achieved one of those things: more CUDA cores in a smaller space. The power consumption is awful, the clock speeds are pretty much identical to Turing (and often worse when pushed) and the heat output is crazy. And by heat I mean total heat, not temperature. Just because the core temps are in the 70s, that doesn't mean the heat situation is good. The cooler has to get rid of all that waste heat (which, in its defence, it does quite well), but the card is still drawing that whopping amount of power and dumping it into your case.
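To put rough numbers on the heat point (ballpark board-power figures, not measurements):

```python
# Practically all of the board power a card draws ends up as heat in your
# case, whatever the core temperature readout says. Ballpark power figures:

cards = {
    "RTX 2080 Ti (Turing, TSMC 12nm)": 250,   # watts
    "RTX 3080 (Ampere, Samsung 8nm)":  320,
}

for name, watts in cards.items():
    print(f"{name}: ~{watts} W drawn -> ~{watts} W of heat to get rid of")

# A core sitting in the 70s just means the cooler is moving that heat off
# the die quickly; the extra ~70 W versus Turing still ends up in the room.
```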
Technically, in engineering terms, Ampere is a complete failure. It will still have some use to gamers, but where it really shines is when you can actually summon the power of all those CUDA cores (rendering, for example, where it demolishes Turing). The problem is that in gaming it's very hard to give them all work.
As such, a good 2080 Ti will out-clock a 3080 whilst using around the same amount of power, and at lower resolutions it will be faster, because more of the GPU is able to be utilised.
Hopefully the 3070 will clock well. If it does, a 16GB version could be ace.
However, that leads me to the last thing you should know about Ampere: it overclocks like total crap. And again, that is down to the poor node. Nvidia have already clocked the balls off it at the factory, leaving pretty much nothing of note in the tank. If it were on a good node, it would overclock its balls off on top of the stock clocks.
Just like Pascal. None of this nonsense about "scan tools are coming" and so on; from day one it was a frickin' speed demon, because it was a great core design on a fantastic node.
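To give a feel for the difference in headroom (ballpark figures of the sort people commonly reported, not guaranteed results):

```python
# Ballpark, not guaranteed: what each card typically boosts to on its own
# out of the box versus the sort of manual overclock people reported.

def oc_gain(stock_boost_mhz, typical_oc_mhz):
    # Percentage gained by manually overclocking over what the card
    # already does by itself.
    return (typical_oc_mhz - stock_boost_mhz) / stock_boost_mhz * 100

print(f"GTX 1080 (Pascal, TSMC 16nm): ~{oc_gain(1850, 2050):.0f}% from a manual OC")
print(f"RTX 3080 (Ampere, Samsung 8nm): ~{oc_gain(1920, 1980):.0f}% from a manual OC")

# Pascal routinely found a couple of hundred MHz on top of its own boost;
# Ampere is already pushed close to the edge of its node, so there is very
# little left to find.
```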
Turing can clock about the same as Pascal under water, but then you need to consider everything else going on in the Turing design. It has Tensor cores, RT cores and so on, so getting all of that to run at those higher clocks is much harder.
However, Pascal was useless for RT and Nvidia knew it. They even allowed you to run RT on it just to see how bad it was.