Timeline for RTX 30 series launch detailed - RTX 3090/3080 are Nvidia's working titles

I'd have thought they'd want these to be available for use with the Cyberpunk 2077 release, TBH.


..Seeing that run at 4K with all the Ray Tracing effects cranked up at good frame rates will be all that's needed for many people to lay down the cash.


Nvidia will want to rake in the cash from Founders Edition sales in Q4 2020.



..and I can't see widespread AIB card availability until Q1 2021, unfortunately.
 

Not having AIB cards available means less profit for Nvidia. I can guarantee their FE cards are already sold to every manufacturer wanting to box them up. Nvidia don't make their own cards; it's still outsourced. They don't want to sell you cards directly because the margins are so low, which is part of the reason you can only buy two cards from their website. It has nothing to do with miners. The margin is far lower than selling the chips to third parties like ASUS/EVGA etc.

It's Ngreedia after all; owning a manufacturing plant is expensive, so palming all those costs off elsewhere is a win-win for them.

Times are changing, of course.
 
I am not worried about them. They will sell every flagship card the moment they come to market.
 
So rumour has it the reason the fans are on opposite sides of the PCB is that Nvidia is doing a sort of 2.5D stack: the main die sits on the usual side of the board and the co-processor for RT/AI is mounted directly behind it.

If this is the case it's quite a drastic change, and I can see why Nvidia are mad someone leaked it.
 
Besides the fact that a 2nd fan wouldn't provide any meaningful additional cooling to the backplate, the penalties, both cost- and performance-wise, of physically separating the RT and AI cores would far outweigh any gains. You'd cap the bandwidth between the shader units and the RT units at a fraction of what's possible currently, and orders of magnitude off what AMD's hybrid RT architecture will allow. All the research and software on the RT side of things indicates both the performance gains and the cost reductions have to come from further integration with the traditional shader units, particularly with regard to their cache and memory access, to negate the silicon penalty, as AMD have shown is possible with their 2017 patents, so that much more of the silicon can be used for both traditional and RT performance.

Imo, if this were true, it would be a step back in terms of being incredibly limiting for programmers, and a huge pain in the a' to implement, with performance characteristics so wildly different from the consoles/AMD (and in many cases it would have to be objectively worse, given how bandwidth- and latency-sensitive these workloads are).
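
For a rough sense of the scale involved, here's a back-of-envelope sketch of the node-fetch traffic a physically separate traversal chip would have to carry. Every figure below is an assumption for illustration, not a leaked spec, and the link figure is just an NVLink-class number standing in for any off-package interconnect:

Code:
// Back-of-envelope: BVH node traffic if RT units sat on a separate die.
// All numbers are illustrative assumptions, not known Ampere/Turing specs.
#include <cstdio>

int main() {
    const double rays_per_sec   = 10e9; // assumed ray throughput (10 Grays/s)
    const double nodes_per_ray  = 30;   // assumed BVH node visits per ray
    const double bytes_per_node = 32;   // assumed compressed node size

    const double traffic = rays_per_sec * nodes_per_ray * bytes_per_node;
    const double link_bw = 100e9;       // ~NVLink-class off-package link, B/s

    printf("Node traffic needed: %.1f TB/s\n", traffic / 1e12);
    printf("Off-package link:    %.0f GB/s (%.0fx short)\n",
           link_bw / 1e9, traffic / link_bw);
}

Even with generous assumptions the traversal traffic lands in on-die cache territory, which is exactly why you'd want these units next to the SMs' caches rather than across a package link.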
 
Nvidia have already filed a patent for it and I'm pretty sure they know more than you do.

A patent for ray tracing (& AI?) co-processors? Provide your sources then; if it's passed me by I'd love to see it.

I assume you're not just going to repost the unrelated 2017 patent on MCM GPUs we've discussed here many times before?
 
I've found the patent one YouTuber has used as the basis for these theories:
http://www.freepatentsonline.com/20200050451.pdf

However, it seems they got a little carried away. This patent is essentially a description of the relationship between the BVH acceleration units and the SMs in Turing (and, one would assume, possible future architectures, though that is never set in stone). It describes the link between the SMs (which, on page 17, Nvidia states fall under their definition of a "microprocessor" for this patent) and the "co-processors", which Nvidia defines for this patent as any unit external to the SM (i.e. not using the same ISA as the SM).

Essentially it seems someone has assumed these co-processors are discrete chips, without reading the page where it explicitly states these definitions are purely logical and not physical, and missing the part where it clearly describes each co-processor being linked to an SM, which of course is how Turing's BVH acceleration units are allocated, and which would imply thousands of these "co-processors" (as described in this patent) per GPU.

Heck, the image doing the rounds, which this person has used to imply a discrete co-processor, explicitly shows the co-processor in question sitting within the GPU rather than as an external chip, unlike the memory (are they confusing the diagram of the data structures within the memory for a second unit?). This diagram would apply to Turing perfectly:
[Image: Nvidia's traversal co-processor diagram from the patent]
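
To make concrete what those per-SM "co-processors" actually offload, here's a minimal CUDA sketch of the BVH traversal loop, the software equivalent of what Turing's fixed-function unit runs. The node layout, stack depth and names are my own assumptions for illustration, not anything taken from the patent:

Code:
// Purely illustrative software version of the traversal work the patent's
// per-SM "traversal co-processor" performs. Layout/names are assumptions.
struct BvhNode {
    float lo[3], hi[3];         // axis-aligned bounding box
    int   firstChild;           // index of first child, or -1 for a leaf
    int   triOffset, triCount;  // triangle range if this is a leaf
};

// Classic slab test; inv[] = 1/direction, precomputed by the caller.
__device__ bool slabTest(const BvhNode& n, const float o[3],
                         const float inv[3], float tMax)
{
    float t0 = 0.f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float ta = (n.lo[a] - o[a]) * inv[a];
        float tb = (n.hi[a] - o[a]) * inv[a];
        t0 = fmaxf(t0, fminf(ta, tb));
        t1 = fminf(t1, fmaxf(ta, tb));
    }
    return t0 <= t1;
}

// One thread = one ray. In hardware this whole loop runs on the traversal
// unit while the SM keeps issuing shading work; hit leaves are handed back
// to the SM for triangle tests/shading - the SM<->co-processor link the
// patent describes.
__device__ int traverse(const BvhNode* nodes, const float o[3],
                        const float inv[3], float tMax,
                        int* hitLeaves, int maxLeaves)
{
    int stack[64], sp = 0, found = 0;   // assumes tree depth < 64
    stack[sp++] = 0;                    // start at the root
    while (sp > 0) {
        const BvhNode& n = nodes[stack[--sp]];
        if (!slabTest(n, o, inv, tMax)) continue;  // prune missed subtrees
        if (n.firstChild < 0) {                    // leaf: record for the SM
            if (found < maxLeaves) hitLeaves[found++] = n.triOffset;
        } else {                                   // inner: push children
            stack[sp++] = n.firstChild;
            stack[sp++] = n.firstChild + 1;        // children stored adjacently
        }
    }
    return found;
}

Keeping that loop adjacent to the SM's caches is what makes the "logical, not physical" separation in the patent cheap; the same loop running across a chip-to-chip link would stall on every node fetch.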
 
Why is that? Wouldn't that mean you need two blocks, one for each side?

Well, we buy fancy, beautiful blocks or coolers only to have them face down, and we're stuck looking at nothing more than a backplate in the case.

The only ones who benefit with current GPUs are those who mount their card vertically, so the block with its channels and lighting is visible.
 