PCI-SIG plans to update us on PCIe 6.0 this week

As you said, we won't see this for a while, but I'm sure the HPC/datacenter markets are itching for something like this.

Bandwidth is a big deal for this market. There's a reason why Nvidia created NVLink and AMD created Infinity Fabric Link for their datacenter GPUs. PCIe was stagnant for too long after PCIe 3.0 was released.

Now PCI-SIG is moving as fast as it can to avoid becoming obsolete. Bandwidth is a big deal in the HPC market, be it in supercomputers, datacenters or the world of AI compute. TBH, I wouldn't be surprised if PCIe 7.0 were revealed (as in, announced as being worked on) before 2023.
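
For a rough sense of the numbers, here's a back-of-the-envelope sketch (it uses the published per-lane transfer rates and 128b/130b encoding, and simply ignores PCIe 6.0's FLIT/FEC overhead, so the Gen6 figure is slightly optimistic). Per-lane throughput doubles every generation, so a x16 slot goes from roughly 16 GB/s per direction on Gen3 to roughly 128 GB/s on Gen6:

```python
# Rough per-direction bandwidth of a x16 slot across PCIe generations.
# Encoding efficiency: 128b/130b for PCIe 3.0-5.0; PCIe 6.0 uses PAM4 with
# FLIT-based encoding, treated here as ~1.0 (real FLIT/FEC overhead shaves
# off a few percent).

GENERATIONS = {  # gen: (transfer rate in GT/s per lane, encoding efficiency)
    "PCIe 3.0": (8,  128 / 130),
    "PCIe 4.0": (16, 128 / 130),
    "PCIe 5.0": (32, 128 / 130),
    "PCIe 6.0": (64, 1.0),
}

LANES = 16

for gen, (gt_per_s, efficiency) in GENERATIONS.items():
    # One transfer carries one bit after encoding; divide by 8 for bytes.
    gb_per_lane = gt_per_s * efficiency / 8
    print(f"{gen}: ~{gb_per_lane:.2f} GB/s per lane, "
          f"~{gb_per_lane * LANES:.0f} GB/s per direction at x{LANES}")
```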
 
See, Infinity Fabric (aka HyperTransport on steroids) and PCIe serve somewhat different purposes. One exists to give super fast, low-latency service to lots of small requests, like the CPU issuing commands to the GPU or moving data from an overloaded cache out to RAM and back. The other, PCIe, is more universal, offering things like hot plugging, switching and bandwidth control (think of it a bit more like an Ethernet network).

So Infinity Fabric is more like a fixed network that doesn't give you the ability to plug in very different devices, and it uses much smaller packets in order to keep latency low. That means proportionally more overhead per packet, since more of the link is spent creating and sending packet headers instead of useful data, which leads to lower useful data transfer speeds over the same physical layer (which can actually be the physical layer of PCIe). PCIe, on the other hand, uses big packets, which means lower packet overhead and more data transferred, but higher latency.
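
To make the overhead argument concrete, here's a tiny sketch with made-up numbers (the 24-byte header/CRC cost below is hypothetical, not the real Infinity Fabric or PCIe TLP format); it just shows that a fixed per-packet cost eats a much bigger share of small packets than of large ones:

```python
# Illustrative only: the overhead figure is a made-up round number, not a
# real link-layer format. The point is that a fixed per-packet overhead
# costs proportionally more on small packets than on large ones.

def link_efficiency(payload_bytes: int, overhead_bytes: int) -> float:
    """Fraction of transmitted bytes that is useful payload."""
    return payload_bytes / (payload_bytes + overhead_bytes)

OVERHEAD = 24  # hypothetical fixed header + CRC per packet, in bytes

for payload in (16, 64, 256, 1024):
    eff = link_efficiency(payload, OVERHEAD)
    print(f"{payload:5d} B payload: {eff:5.1%} of the wire carries useful data")
```

Small packets win on latency, but large ones amortize the headers, which is exactly the trade-off between the two interconnects.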

Also, Infinity Fabric has extra control lanes, called the Scalable Control Fabric, which are in charge of monitoring and controlling the thermals, clocks, power and many other low-level aspects of each individual die.

So as similar as IF and PCIe are, they are also very different and can't fully overlap each other, which means they can't be true competitors to each other.
 
So the TX/RX lanes are switching at a minimum of 32 GHz. That's going to cause some major problems with trace routing. PCIe 4.0 already suffers pretty badly from attenuation and crosstalk over distance, and this will only get worse.
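
As a very rough illustration of why distance hurts more as the signaling rate climbs (a toy loss model with made-up coefficients, not a real channel simulation): copper trace loss grows roughly with the square root of frequency from skin effect, plus a term linear in frequency from dielectric loss, so each doubling of the signaling frequency costs noticeably more dB over the same trace length:

```python
import math

# Very simplified trace-loss model, per inch of FR-4 stripline:
#   loss(f) ~ K_SKIN * sqrt(f) + K_DIEL * f   (dB/inch, f in GHz)
# The coefficients are made-up ballpark values for illustration,
# not measured board data.
K_SKIN = 0.1   # hypothetical skin-effect coefficient (dB / inch / sqrt(GHz))
K_DIEL = 0.05  # hypothetical dielectric-loss coefficient (dB / inch / GHz)

TRACE_INCHES = 10

for f_ghz in (8, 16, 32):
    loss_db = (K_SKIN * math.sqrt(f_ghz) + K_DIEL * f_ghz) * TRACE_INCHES
    print(f"{f_ghz:2d} GHz: ~{loss_db:.1f} dB over {TRACE_INCHES} inches of trace")
```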
 