I think a big part of the reason there's less of a noticeable difference in graphical fidelity now is that traditional raster graphics have kind of hit a wall, with few meaningful improvements for over half a decade. It's much the same reason the fidelity jump between console generations keeps getting smaller and less noticeable even as the absolute power increases get larger. That's why so many games nowadays look great at minimum settings and only slightly better at ultra.
Quote:
but it does seem a very strange coincidence that stuff like Crossfire and SLi support seemed to fall off a cliff when the Xbone and PS4 came out. Why? because they are X86, so they had to do 0 to get it running on PC, compared to before where they had to do the work.
|
I think the drop in popularity of mGPU use, which led to the drop in focus, came from two things that both predate the current console generation. First, truly large-die GPUs started to exist with Kepler (Titan) and GCN (Hawaii), so mGPU's "sensible price range" moved well beyond what most people spend on a PC, and NVidia removing SLI headers from its mid-range cards and below didn't help. Second, all the research from around 2012 onwards into frame timing exposed the frame consistency and latency problems inherent to AFR setups, which effectively killed its popularity for mid/low end cards, exactly where those issues were exacerbated. Both of these were already seen at the time as the nail in the coffin for a fading technology, though there was more optimism back then about alternatives to AFR for mGPU use; in practice those all seemed to fall short or only work for very specific games.
Quote:
RTX is one tech available on a small amount of cards supplied by one manu. Once Intel enter the market RT will become even more of a hassle to make games for. And as I said, the last PC exclusive (because Control isn't even an exclusive even though it support RT) was Crysis and look how long ago that was.
|
I have to disagree with this one. When Intel and AMD bring gaming-oriented raytracing support to their GPUs, it will be through the DXR API that they and NVidia helped develop for exactly this purpose, and which every current raytracing game uses. Microsoft took extreme care to make DXR completely hardware agnostic, to the degree that you can run the API on a CPU right now if you wanted to, although you'd get a very slow slideshow in anything beyond basic few-polygon demos. There will need to be some new code paths without a doubt, but that would be a fraction of the work of a full implementation. The similarities between Turing and RDNA are already numerous, as is their overall approach to accelerating raytracing, and given the engineers leading Intel's GPU development it's unlikely their architecture will be a world apart either, especially as that would hinder uptake so significantly.
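To give a rough idea of what "hardware agnostic" means in practice: a game doesn't ask which vendor's card is installed, it just queries the D3D12 runtime for the raytracing tier and picks a code path from that. Below is a minimal sketch of that check, assuming a D3D12 device has already been created and with error handling trimmed; it's illustrative, not lifted from any particular engine.

Code:
#include <d3d12.h>

// Returns true if the installed GPU (whoever makes it) reports DXR tier 1.0+.
bool SupportsHardwareRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // Any vendor reporting TIER_1_0 or above can run the same DXR code path,
    // which is why the API itself doesn't care whose silicon is underneath.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}

So when AMD and Intel ship DXR-capable cards, a title that already has an RT path mostly needs driver support and tuning, not a ground-up reimplementation per vendor.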