PCIe 4.0 devices are being shown off at IDF 2016

We also need to remember that storage is moving over to PCIe. In a few years NVMe may be the norm, and we will need the extra bandwidth.

It also means that PCIe 4.0 CPUs and motherboards will be much more capable of SLI and CrossFire. Remember that an x8 PCIe 4.0 link has the same bandwidth as an x16 PCIe 3.0 link.
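That doubling claim is just arithmetic; here is a minimal sketch (the function name and table are my own, and the figures only account for line encoding, not higher-level protocol overhead):

```python
# Rough per-lane PCIe throughput by generation:
# raw transfer rate times 128b/130b encoding efficiency.
GENERATIONS = {
    3: (8.0, 128 / 130),   # PCIe 3.0: 8 GT/s per lane
    4: (16.0, 128 / 130),  # PCIe 4.0: 16 GT/s per lane
}

def throughput_gbs(gen: int, lanes: int) -> float:
    """Usable one-way throughput in GB/s: GT/s * efficiency / 8 bits per byte."""
    rate, eff = GENERATIONS[gen]
    return rate * eff * lanes / 8

# An x8 PCIe 4.0 link matches an x16 PCIe 3.0 link (~15.75 GB/s each):
print(f"x8  Gen4: {throughput_gbs(4, 8):.2f} GB/s")
print(f"x16 Gen3: {throughput_gbs(3, 16):.2f} GB/s")
```

Since PCIe 4.0 keeps the 128b/130b encoding of 3.0 and simply doubles the transfer rate, halving the lane count at the new generation lands on exactly the same number.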
 
Really.. this is clearly progress; we're getting faster, and the extra speed is going to be needed.

In 2025.

If we've not even maxed out PCIE3 yet with four Pascal Titan Xs, then pray tell how is PCIE4 needed?

PCIE3 was released years too early as well. It was only done because, other than onboard USB3 and SATA 3, Intel's X79 platform offered nothing over X58. In fact, some X58 boards came with both.

My bet is we will see these PCIE4 boards come on new sockets for CPUs that are about 5-10% faster than what we have now, and the PCIE part will be the main selling strategy.

Obviously they're not going to move on from DDR4, so this will give them the perfect sales pitch.


Even enthusiasts are not on all-PCIE storage, dude. It's far too expensive for the boosts it gives a regular desktop user. Don't get me wrong, my RAIDR is much faster than a SanDisk (I can feel it, especially loading up things like Photoshop), but that doesn't mean I would switch to all-PCIE storage, as it costs far too much.

Now in server land? Yeah, check out AMD's Zen board with loads of PCIE slots. It's just another server hand-me-down, dude, with the usual "Move along, nothing to see here" sign hanging from it.
 

Dude, I know that not many people are on PCIe storage. Not even TTL uses PCIe storage on his main rig right now. I am just talking about moving forward.

Within the next year everyone will be releasing their 3D NAND to compete with Samsung, and we have already seen that NVMe controllers are becoming much more readily available for consumer drives.

Right now a lot of people buy X99 for the extra PCIe lanes, but with PCIe 4.0 even mainstream CPUs will have tonnes of bandwidth.

At the earliest, Intel will add PCIe 4.0 with Skylake-E, and it will certainly hit the enterprise market first.

The server market needs the extra bandwidth, so that is why this is being made, same as with PCIe 3.0.

PCIe 4.0 will be backwards compatible, so it really isn't an issue for consumers. Intel would make new sockets and boards anyway.
 
PCI-E slots should never be used to connect SSDs; that is just lazy and inefficient of Intel.

Intel and the SSD manufacturers need to come up with a system that works like SATA but much faster, using a very small connector into the motherboard.

Doing the above would mean new CPUs, motherboards and SSDs. This should not be difficult for Intel, as they release new boards/CPUs every 5 minutes anyway.

There is no reason why Intel cannot come up with a standard for SSDs that is many times faster than anything we have today.
 

Either way, it requires Intel to dedicate silicon to something that is the "equivalent" of PCIe lanes dedicated to storage tasks.

It is the same as a lot of other problems: yes, they could have made a whole new solution, but it is much easier to use PCIe lanes. PCIe and SATA storage have been linked for years anyway, so it was logical to use PCIe.

Intel has already moved with Skylake to separate PCIe for storage from PCIe for GPUs etc., with the additional four lanes that were included in Skylake for storage. My guess is that the next Intel X socket will have 40 lanes plus 8 lanes for storage or other M.2/U.2 hardware.
 
I used RevoDrives for several years but they are so slow for booting up that I gave the things away in the end.

You don't notice the extra raw speed but you do notice the lengthy boot times.
 
I like the idea of being able to use x8 PCI-E 4.0 for your GPU and another x8 + x4 for SSDs. I'd love to see a 512GB x4 SSD for the boot drive and programs, a 1TB x4 SSD for games, and a 2TB x4 SSD for storage files, all connected to the motherboard without annoying SATA cables or populating drive bays. Then you could just have an external 2TB mechanical drive for backup. You would have no need for drive bays ever again.
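The lane budget in that build is easy to total up; a toy sketch, with the quantities taken as assumed from the post above:

```python
# Hypothetical CPU lane budget for the build described above.
budget = {
    "GPU (x8)": 8,
    "512GB boot/programs SSD (x4)": 4,
    "1TB games SSD (x4)": 4,
    "2TB storage SSD (x4)": 4,
}
total = sum(budget.values())
print(f"CPU lanes needed: {total}")  # 20 lanes
```

At PCIe 4.0 speeds, those 20 lanes would carry roughly what 40 PCIe 3.0 lanes do today, which is why a mainstream-sized lane count could plausibly feed a GPU and three NVMe drives at once.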
 

Advancement and faster tech is a good thing, and it will benefit us gaming and enthusiast types too.
 
This should help AMD's XDMA, I'd think. It would just guarantee that there will be no chance of CrossFire being bandwidth limited.

What I think they should mostly improve on here, though, is latency. If latency is reduced between the CPU and GPUs, it should help smooth out the multi-GPU experience somewhat. And with adaptive sync, it would help reduce any stutter at all.
 
Meh, still using PCI-E 2.0 here ^_^ My PC upgrade cycles are long though: Windows 3.1 in 1994, then an XP machine in 2003, then Windows 7 in 2009.
 