Nope, and that justification makes no sense anyway. A "core" can be anything you want it to be; for all intents and purposes there are thousands of distinct cores in a GPU. It's quite rare to refer to a die as a core unless the die only has a single core, but there isn't really such a thing as a single-core GPU beyond tiny 2D mobile designs — GPUs are inherently highly parallel devices.
If a multi-die GPU is configured to work across a single workload, without needing a cluster API or similar to act as an interface between distinct GPU interfaces, then it is acting and interfacing as a single processing unit, and hence is generally referred to as a single Graphics Processing Unit. This is in contrast to multi-GPU gaming configurations, which almost always run two independent but identical workloads in relative sync.
This is a really active area of research, and one that will see a lot of consumer interest over the coming years, so it's imperative we don't use confusing language like "multi-GPU" for hardware that not only won't be multi-GPU in any way as far as software is concerned (beyond NUMA awareness), but for all intents and purposes will be a single external package too. The inner workings of a multi-die, single-interposer, NUMA-aware GPU are nothing like anything currently referred to as multi-GPU.
PCIe 5 & NVLink are useful for clustered GPGPU setups, but at the moment they require a clustering API and other software to present that interface, and they don't have nearly the bandwidth required to do graphics computation in a clustered fashion — the data transfer required for that exceeds even current Infinity Fabric bandwidth. It will be long after we have multi-die single-interposer GPUs before external links are capable of the same; it likely won't be possible until consumer optronics arrive.
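To make the bandwidth gap concrete, here's a back-of-envelope sketch. All of the numbers below — bytes of intermediate data per pixel, the link bandwidth figures — are illustrative assumptions, not vendor specs; the point is only the order-of-magnitude comparison:

```python
# Rough estimate of the per-second data that would have to cross a link
# if two GPUs cooperated on a single frame's graphics work, compared
# against assumed external-link bandwidths. All figures are illustrative.

GBYTE = 1e9

bytes_per_pixel = 256         # assumed intermediate traffic (G-buffer,
                              # depth, lighting passes) crossing the split
width, height = 3840, 2160    # 4K
fps = 144

traffic_per_sec = width * height * bytes_per_pixel * fps / GBYTE

# Assumed usable bandwidths in GB/s (rough, for illustration only)
links = {
    "PCIe 5.0 x16 (~63 GB/s)": 63,
    "NVLink bridge (~100 GB/s)": 100,
    "On-package fabric (~800 GB/s)": 800,
}

print(f"Estimated cross-link traffic: {traffic_per_sec:.0f} GB/s")
for name, bw in links.items():
    verdict = "OK" if bw >= traffic_per_sec else "insufficient"
    print(f"  {name}: {verdict}")
```

Under these assumptions the required traffic lands around 300 GB/s — well past any external link, but within reach of an on-package interposer fabric, which is the crux of the argument above.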
But regardless, none of this is useful for the mGPU software stack (PC or otherwise), because the whole concept is built around not requiring a different software stack, as current mGPU setups do. If anything, multi-die GPUs have a higher chance of killing traditional mGPU software interfaces/approaches like CFX/SLI, as dual-card setups for larger arrays of stream cores become uneconomical compared to single cards with large interposers.
Yeah, this. AFR-based multi-GPU is fundamentally detrimental to input latency compared to a single larger GPU.
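A toy model makes the AFR latency problem visible: with alternate-frame rendering, two slower GPUs can match a big GPU's frame *cadence*, but each frame still takes the slower GPU's full render time from input to display. The millisecond figures here are hypothetical:

```python
# Toy latency model contrasting AFR multi-GPU with a single GPU of
# equal total throughput. All timings are illustrative assumptions.

def afr_latency(per_gpu_frame_ms: float, num_gpus: int) -> dict:
    """With alternate-frame rendering each GPU still spends the full
    per_gpu_frame_ms on its own frame; only the presentation cadence
    improves, not the input-to-display latency of any one frame."""
    return {
        "frame_interval_ms": per_gpu_frame_ms / num_gpus,  # throughput
        "render_latency_ms": per_gpu_frame_ms,             # input -> display
    }

def single_gpu_latency(frame_ms: float) -> dict:
    return {"frame_interval_ms": frame_ms, "render_latency_ms": frame_ms}

# Two mid-size GPUs at 16.6 ms/frame vs one big GPU at 8.3 ms/frame:
afr = afr_latency(16.6, 2)
big = single_gpu_latency(8.3)
print(afr)  # same ~8.3 ms frame cadence...
print(big)  # ...but the single GPU reflects new input twice as fast
```

Both configurations hit ~120 fps, yet the AFR pair shows each frame a full 16.6 ms after its input was sampled (plus any inter-GPU sync overhead, which this sketch ignores), while the single GPU shows it in 8.3 ms.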