At stock, Pascal and Polaris die sizes & power consumptions are only about 15% apart. The RX480 has a 230mm² die, the GTX1060 a 200mm² one. The RX480 (stock) has a 150W TDP, and you don't lose much performance by forcing the power target down to ~120W as you'd find in laptop variants; the GTX1060 had a 120W TDP. Because of 14LP's terrible clock-vs-power scaling curve, Polaris' efficiency dropped off much more sharply as clock speed increased, which is why you could also get 216W RX480/580s with only about a 10-15% performance advantage over the 150W models. Much of that is just positioning them as all-out gaming cards and actively sacrificing efficiency to do so.
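As a rough sketch of that efficiency cliff, here's the perf-per-watt math using the figures above. Performance is normalized to the stock 150W card; the 0.95 and 1.12 scaling factors are illustrative assumptions (the 1.12 is just the midpoint of the 10-15% range), not measurements:

```python
# Back-of-the-envelope perf/W comparison of Polaris power targets.
# Performance numbers are normalized guesses for illustration only:
# stock 150W RX480 = 1.00, ~120W target = 0.95, 216W model = 1.12.
cards = {
    "RX480 @ ~120W": (0.95, 120),
    "RX480 @ 150W (stock)": (1.00, 150),
    "RX480/580 @ 216W": (1.12, 216),
}

baseline_perf, baseline_watts = cards["RX480 @ 150W (stock)"]
baseline_eff = baseline_perf / baseline_watts

for name, (perf, watts) in cards.items():
    eff = (perf / watts) / baseline_eff
    print(f"{name}: {eff:.0%} of stock perf/W")
```

Under those assumptions the ~120W target sits around 119% of stock efficiency while the 216W model drops to roughly 78%: chasing that last ~12% of performance costs about a fifth of the card's perf/W.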
Edit:
Zen was a risk worth taking. Not only does it appeal to gamers, it also appeals to everyone else. It allowed them to create a massive product stack. I don't see what luck had to do with it.
> GPUs, on the other hand, are expensive to design and execute, and the market for those is arguably smaller too.
This is no longer true. x86 CPUs are still by far the most complex & expensive type of processing chip you could design: even though they're relatively small, they have an extremely high variety of execution units and far greater per-mm² complexity (plus a whole mess of legacy & licensing considerations that make the cost against an ARM CPU several orders of magnitude higher). GPUs might be large, but generally that's because they use several thousand identical copies of one fairly small core containing only a few types of execution unit.
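A toy model to make the shape of that argument concrete: design effort scales with the number of *distinct* block designs, while die area scales with the *total* number of block instances. Every number below is hypothetical and picked purely for illustration (the 2304 merely echoes the RX480's shader count), not real engineering data:

```python
# Toy model: design cost tracks distinct blocks; die area tracks total
# instances. All numbers are hypothetical, chosen only for illustration.

def design_effort(distinct_blocks: int) -> int:
    # Each distinct block must be designed and verified once;
    # stamping out extra copies is nearly free in engineering terms.
    return distinct_blocks

def die_area(instances: int, area_each: float, uncore: float) -> float:
    # Every instance costs silicon area, reused design or not.
    return instances * area_each + uncore

# Hypothetical GPU: a handful of unique designs, replicated thousands
# of times (2304 echoes the RX480's shader count).
gpu = {"distinct": 6, "instances": 2304, "area_each": 0.08, "uncore": 45.0}
# Hypothetical x86 core complex: dozens of distinct units, few copies each.
cpu = {"distinct": 70, "instances": 120, "area_each": 1.2, "uncore": 50.0}

for name, chip in (("GPU", gpu), ("x86 CPU", cpu)):
    print(f"{name}: ~{design_effort(chip['distinct'])} unique designs, "
          f"~{die_area(chip['instances'], chip['area_each'], chip['uncore']):.0f}mm² die")
```

Similar die sizes come out of wildly different amounts of unique design work, which is exactly the cost asymmetry being described.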
Meanwhile, GPUs have quickly grown into the most in-demand market in computing, and they're expected to continue on this path of rapid growth as AI develops further. GPUs are now just as versatile as CPUs in many ways, and in far higher demand in many enterprise & research areas.
Not throwing all their resources behind catching up on GPUs now would be suicide for AMD. Everyone, including investors at this point, and obviously Intel, knows that GPUs are where the money is going forward. AMD cannot survive as an x86 CPU company with mediocre GPUs; they'd just end up back in the Piledriver/GCN days but in reverse, and they can't take that kind of hit twice in a decade.
There's a reason why AMD's R&D budget has ballooned over the last few years while the Radeon group has been significantly restructured with new leadership.
Yes, NVidia held Volta back from consumers for around 8 months, but that's mostly because it wasn't yet economically viable even for a Titan-class device. Since Volta, NVidia has been at the absolute edge of what's possible on current nodes. If it's physically impossible to create a larger device and they don't yet have a new architecture or a new node to use (even if they used 7nm, there's no indication anyone could actually make devices that large on it yet), then they're at the physical limits of what they can do, period.