Don't forget to add to your write-up that the scaling of voltage and current is reaching its limits, and as it does, leakage current grows, which dumps more heat into the CPU and hurts performance, or at least that's how I understand it so far. I'm not that far into my physics courses and haven't hit the material needed to fully understand what happens at the transistor level.
With the current architecture you need disproportionately more voltage for higher clocks, as stated. The main problem is the heat: more current through a wire means more heat. If you can remove that heat, you can scale up to the physical limit of the components. You can see pro overclockers reaching insane clocks just by removing heat with LN2 and feeding the chip enough current. Past a certain point, the amount of power required to go one more step up in clocks becomes very high and not practical at all. So you can go higher, it is just not viable. That is the limitation, and the way current technology works.
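Rough numbers help here. Below is a minimal back-of-the-envelope sketch (Python) using the standard switching-power approximation P ≈ C·V²·f. The capacitance and the voltage-per-clock figures are assumptions picked purely for illustration, not measurements from any real chip:

```python
# Rough back-of-the-envelope: dynamic CPU power scales as P ~ C * V^2 * f.
# All numbers below are illustrative placeholders, not real chip values.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Approximate switching power in watts: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

C = 2.5e-8  # effective switched capacitance (assumed, purely for illustration)

# Assume the voltage has to climb with clock speed to keep the chip stable.
operating_points = [
    (4.0e9, 1.00),  # 4.0 GHz at 1.00 V (hypothetical)
    (5.0e9, 1.25),  # 5.0 GHz needs noticeably more voltage (hypothetical)
    (6.0e9, 1.55),  # 6.0 GHz needs even more (hypothetical)
]

for f, v in operating_points:
    watts = dynamic_power(C, v, f)
    print(f"{f / 1e9:.1f} GHz @ {v:.2f} V -> ~{watts:.0f} W of switching power")

# Voltage enters squared and frequency linearly, so a 50% clock bump that
# also needs more voltage can roughly triple the heat you have to remove.
```

LN2 doesn't change that math, it just lets you dump the heat fast enough (and colder silicon also leaks less), which is why extreme overclocks need both the cooling and a lot of current.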
What is happening is that the technology in current CPUs was developed a decade or more ago and was never meant to go this high. It has just been refined since then, and now it can't be refined much further.
Theoretically Intel could reach 10 GHz in their labs with the current technology, but it would require liquid helium cooling and an insane amount of current. That is just not viable for consumer use. So instead they optimized other things to improve performance, rather than chasing clock speed alone.
You can also reduce the current requirements by going to a smaller fabrication process, but that only scales to a point. Again, the old architecture cannot function well at very small scales, because it was not designed for them. You can compensate for that, but again only up to a certain point.
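To illustrate the "only up to a point" part: in the idealized Dennard-scaling picture, a shrink cuts capacitance and voltage together, so power density stays flat even as clocks rise; once the supply voltage can no longer drop, most of that benefit disappears. The scale factors below are assumptions for illustration, not real process-node data:

```python
# Idealized Dennard scaling: shrink linear dimensions by a factor s (< 1),
# and C, V, and switching delay all shrink by roughly s. Power per transistor
# drops as ~s^2 even while the clock rises by 1/s, and you fit 1/s^2 more
# transistors in the same area -> power density stays roughly constant.
# The breakdown: once V can't drop much further, that stops working.
# All numbers here are assumptions for illustration, not real node data.

def power_per_transistor(c, v, f):
    return c * v ** 2 * f

s = 0.7                       # one "full node" shrink (assumed factor)
c0, v0, f0 = 1.0, 1.0, 1.0    # normalized starting point

# Ideal scaling: capacitance, voltage, and delay all shrink with s.
ideal = power_per_transistor(c0 * s, v0 * s, f0 / s) / power_per_transistor(c0, v0, f0)

# Post-Dennard reality: capacitance still shrinks, but voltage barely moves.
real = power_per_transistor(c0 * s, v0 * 0.98, f0 / s) / power_per_transistor(c0, v0, f0)

print(f"ideal shrink:   power per transistor x{ideal:.2f}")
print(f"voltage stuck:  power per transistor x{real:.2f}")

# With ~1/s^2 (about 2x) more transistors per area, the "voltage stuck" case
# means power density goes UP with every shrink instead of staying flat.
```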
The main problem is that we are reaching the absolute limit of an architecture that was designed so long ago, and it can't be improved much more. We need a completely new CPU microarchitecture design, which is probably well into development but not ready for consumers. That is why we see the core-count increases instead. Coffee Lake is the perfect example. Intel did improve things from Sandy Bridge to Kaby Lake, a constant stream of improvements from generation to generation. But then they couldn't upgrade Kaby Lake any further, so they put two more cores on it and called it an upgrade.
Ice Lake will probably be the last hurrah for the current architecture, with maybe a last dying breath in the revision after Ice Lake. Then things will need to change drastically for bigger steps in performance.