Pascal Rumours: Who's Excited?

I'm planning on getting a 2560x1440 G-Sync monitor before the year ends, and keeping it a single-card configuration. I like max settings, but only when possible and when they make an actual difference. I will never buy a Titan card (I fail to see the point of their existence for me personally), so I can understand if I have to make do with somewhat lesser settings here and there in the future, and that is okay.

I find most cards overpriced at launch; it's the early-adopter premium, I think. So I will wait for prices to settle somewhat before buying Pascal. Plus, we get 21% tax here *ouch*, and sadly there aren't really proper offers on compared to e.g. Overclockers. My 970 is only 2.5 months old, so I can wait for prices to come down a bit.
Yeah, same here in Ireland. High VAT rates.

For 2560x1440, the x80 or x70 should be enough, based on what we've seen in the past. Unless games suddenly take another lunge forward graphically, the x70 should offer about 980 Ti performance, which is more than enough currently for 1440p, while the x80 will be more like a heavily overclocked 980 Ti or SLI 970s. That's my guesstimate. As mentioned, though, it's all speculation.
 
How can we know? It's not even out lol :p

The only thing we do know about HBM2 is that the speeds are doubled, so you wouldn't be wrong in thinking it'll be 1000MHz, since the current one is 500MHz. Unless it's still 500MHz but capable of doing two inputs and two outputs per cycle. I think it'll be the former, though.

True, I was hoping to get some more insight into HBM2 itself. Its specs on the cards will remain a mystery until they're finalised and leaked (as always).

1GHz (2GHz effectively) sounds nice :)
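
If that pans out, the napkin math lines up nicely with the 1TB/s figure that's been floating around. A rough sketch, assuming HBM2 keeps HBM1's 1024-bit interface per stack and a four-stack card (both my assumptions, not confirmed):

```python
# Per-stack bandwidth: DDR means two transfers per clock, and each HBM1
# stack has a 1024-bit (128-byte) interface; assuming HBM2 keeps that width.
def stack_bw_gbs(clock_mhz, bus_bytes=128):
    transfers_per_s = clock_mhz * 1e6 * 2  # DDR: 2 transfers per clock
    return transfers_per_s * bus_bytes / 1e9

print(stack_bw_gbs(500))       # HBM1 at 500MHz:   128 GB/s per stack
print(stack_bw_gbs(1000))      # HBM2 at 1000MHz:  256 GB/s per stack
print(4 * stack_bw_gbs(1000))  # four stacks:      1024 GB/s, ~1 TB/s
```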



Yeah, I'll pick an x80 or x70 according to price when they're launched. But I'm really hoping for the x80 variant :) I wonder what naming scheme they'll start with for Pascal.

Sure, but speculating is fun, isn't it! I think there was talk of lowering the tax back to 19% (as before), but I can't be sure - I certainly hope so!
 
I doubt lowering the taxes will happen in Ireland. If anything, it'll go up. One can hope, though. ;)

I personally am waiting to see what AMD do with Arctic Islands. I'm tied in somewhat to a FreeSync panel at the moment, but I really want to see whether they can claw back some of their market share. The Fiji line-up was not a success, IMO, but I don't believe they were banking on it, which suggests they are hoping their Zen architecture and Arctic Islands GPUs on the 16nm process will totally supplant the Fury X. I doubt it'll be superior to Pascal, but it doesn't need to be. If they can reduce the cost of their top-tier models, we may have another 290X vs 780 Ti situation, with those wanting great performance on a budget opting for the 290X and forgoing the extra performance and benchmarking potential of the 780 Ti. I think Pascal will still be the most powerful, still possess better overclocking, still be more efficient, still have better driver support, but AMD will simply be cheaper. That's what I'm hoping for.
 
Couldn't agree more, and it's my hope as well that AMD will be more competitive next round, which is good for both camps. That's the downside of having either a FreeSync or G-Sync panel; it kinda makes you bound to the respective brand.
 
AMD have done amazingly this gen so far. It's really only with the Fury X and the 980 Ti that it becomes less clear who's winning. Below that, though, AMD beats out Nvidia at every price point. What the jump to 14/16nm needs to do for AMD is help out their power consumption, because PC gamers for some reason care about that even when most buy 800W+ PSUs, while also delivering a massive leap in performance. They will get both, no doubt, but I feel if they can reduce consumption by about 50 watts at every tier and then maximise performance, they will be onto a winner.

FreeSync is less limited; Intel will probably be jumping on board with Kaby Lake. For budget gamers it's pretty neat that budget FreeSync monitors are already available. My wish is that FreeSync becomes more common on TV sets too. It could be useful for some of the console games that do hit 60fps, and help minimise TVs' really high input lag. Heck, even in normal 60Hz mode for watching sport it would be a pretty big jump.
 
AMD's rebranding was actually a good idea, at least for consumers. If you bought a 290X when it first came out two years ago, the chip would still be going strong: the 390X is the same silicon and one of AMD's biggest sellers, so it pays for AMD to keep supporting it now that it's one of their flagships.
 
Not sure how accurate/new this is:

'Nvidia’s next-gen ‘Pascal’ graphics cards will get 16GB of HBM2 memory'

'At the GPU Technology Conference in Japan, Nvidia Corp. once again revealed key features of its next-generation graphics processing architecture code-named "Pascal". As it appears, the company has slightly changed its plans concerning memory capacity supported by its upcoming GPUs.

As expected, Nvidia’s high-end graphics processor that belongs to the “Pascal” family will feature an all-new architecture with a number of exclusive innovations, including mixed precision (for the first time Nvidia’s stream processors will support FP16, FP32 and FP64 precision), NVLink interconnection technology for supercomputers and multi-GPU configurations, unified memory addressing as well as support for second-generation high-bandwidth memory (HBM generation 2).

Based on a slide that Nvidia demonstrated at the GTC Japan 2015, next-generation high-end graphics cards with “Pascal” GPUs will sport up to 16GB of HBM2 with up to 1TB/s bandwidth. Previously Nvidia expected select solutions with “Pascal” graphics processors to feature up to 32GB of HBM2.'

http://www.kitguru.net/components/g...-graphics-cards-will-get-16gb-of-hbm2-memory/
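
The drop from 32GB to 16GB makes sense if you think in stacks. HBM2 dies are 8Gb (1GB) each, so capacity comes down to how high the stacks go; the four-stack configs below are just my guess at what the slide implies:

```python
# Capacity for hypothetical four-stack HBM2 configs (8Gb = 1GB per die)
def capacity_gb(stacks, dies_per_stack, gb_per_die=1):
    return stacks * dies_per_stack * gb_per_die

print(capacity_gb(4, 4), "GB")  # 4-high stacks -> 16GB (the new figure)
print(capacity_gb(4, 8), "GB")  # 8-high stacks -> 32GB (the old figure)
```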
 
I called the 16GB limit ;)

Basically everything else they showed in the slide we already knew from rumours. The die shrink must have saved them a ton of space, as supporting FP16/32/64 would require a lot of die area.
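
For anyone wondering what mixed precision actually buys: each step down from FP64 to FP32 to FP16 halves the bytes stored and moved per value, at the cost of precision. A quick numpy illustration of the formats themselves (nothing Pascal-specific):

```python
import numpy as np

values = np.linspace(0.0, 1.0, 1_000_000)
fp64 = values.astype(np.float64)
fp32 = values.astype(np.float32)
fp16 = values.astype(np.float16)

# each step down halves the memory footprint (and bandwidth per element)
print(fp64.nbytes, fp32.nbytes, fp16.nbytes)  # 8000000 4000000 2000000

# the trade-off: FP16 only keeps roughly 3 decimal digits of precision
print(np.float64(1/3), np.float32(1/3), np.float16(1/3))
```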
 

You did! The article reads as though we might get 16GB on the consumer cards after all? That can't be.
 
It'll probably be exclusive to the new Titan and the Ti below it. Everything else would be 8GB, I would guess, which is more than enough.

Yeah, that would make the most sense. We don't need 16GB - well, I don't :D I still think the Ti might get 12GB to set it apart from the regular x80 and a Titan. Anyway, looks like I'm going from 4GB to 8GB, nice!

It's too bad that NVLink is reserved for the Quadro and Tesla cards; I imagine SLI users might have benefited from it, heck, maybe even single-card users. It would make the cards more costly though, I presume.
 
NVLink is going to be an interconnect only present on motherboards for supercomputers and other systems dedicated to heavy computing...

PCIe 3.0 has plenty of bandwidth for current GPUs (gaming-wise), even at x8.
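
The numbers bear that out: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so usable bandwidth per direction works out to roughly:

```python
# PCIe 3.0 usable bandwidth per direction
gt_per_lane = 8            # 8 GT/s per lane
encoding = 128 / 130       # 128b/130b line encoding
lane_gbs = gt_per_lane * encoding / 8  # bits -> bytes: ~0.985 GB/s/lane

print(f"x8:  {8 * lane_gbs:.2f} GB/s")   # ~7.88 GB/s
print(f"x16: {16 * lane_gbs:.2f} GB/s")  # ~15.75 GB/s
```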
 

No kidding, and here I was thinking they'd put it on the card *oops*

Edit:

I brought it up before but we didn't discuss it, so I'd like to bring it up again. The PCPer video showed improvements in frame times and something else with Maxwell on newer Skylake CPUs vs Sandy Bridge. They ran the CPUs at stock. Am I right in expecting that overclocked Sandy Bridge CPUs would get closer to, or even on par with, stock Skylake in those tests, thus alleviating the need to upgrade the CPU for Pascal?

I'm already buying a new monitor, but upgrading basically the entire system for a GPU makes little sense budget-wise.
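
For context on what the PCPer comparison actually measures: frame-time analysis is essentially percentile statistics over a captured log, roughly like the sketch below. The capture data here is invented purely for illustration:

```python
import numpy as np

# invented frame-time captures in milliseconds, one value per frame
rng = np.random.default_rng(0)
sandy = rng.normal(18.0, 4.0, 5000).clip(min=8.0)
skylake = rng.normal(15.0, 2.0, 5000).clip(min=8.0)

def summarize(name, ft_ms):
    avg = ft_ms.mean()
    p99 = np.percentile(ft_ms, 99)  # the "worst 1%" of frames
    print(f"{name}: avg {avg:.1f} ms ({1000/avg:.0f} fps), "
          f"99th percentile {p99:.1f} ms ({1000/p99:.0f} fps)")

summarize("2600K stock", sandy)
summarize("6700K stock", skylake)
```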
 
I'm not sure what additional features the 6700K offers over the 2600K in gaming. I do know that it's not just about speed, though that plays a huge part. I think it depends on the overclock, the game, and the rest of your system. For instance, a 2600K at 4.8GHz vs a 6700K at 4GHz at 4K with 980 Ti SLI will show very little difference. But with a 980 at 1080p, comparing a 6700K at 4GHz vs a 2600K at 4.6GHz in games like GTA V or Crysis 3, you may see a more noticeable difference.
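
One toy way to picture that: the CPU usually prepares the next frame while the GPU renders the current one, so frame time is roughly whichever side is slower. All numbers below are invented, including the assumed per-clock advantage for Skylake, but they show why the CPU gap vanishes once the GPU is the bottleneck:

```python
# Toy bottleneck model: frame time ~ max(CPU time, GPU time)
def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

# invented per-frame CPU cost, expressed as "ms of work at 1GHz";
# the lower figure for Skylake assumes ~25% better per-clock performance
chips = [("2600K stock (3.4GHz)", 40.0, 3.4),
         ("2600K @ 4.8GHz",       40.0, 4.8),
         ("6700K stock (4.0GHz)", 32.0, 4.0)]

for res, gpu_ms in [("4K, GPU-bound", 25.0), ("1080p, CPU-bound", 7.0)]:
    for name, work, ghz in chips:
        print(f"{res}: {name}: {fps(work / ghz, gpu_ms):.0f} fps")
```

With these made-up numbers, everything lands at the same fps at 4K, while at 1080p the overclocked 2600K recovers most of the gap to the stock 6700K.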
 
Heyyo,

HBM2 sounds great and all... but I'm more interested in the compute gains. Better compute on Pascal could mean better DX12 performance when async compute is used... but of course it remains to be seen how popular that becomes, much like multi-adapter.

That, and this whole tiered DX12 support could mean issues depending on the game and what it uses. I still remember in DX9 when NVIDIA skipped 24-bit shader precision and went straight to 16-bit and 32-bit support... and of course Valve opted for 24-bit shaders for the Source engine and HL2, so the FX 5000 series of GPUs sucked until better 16-bit shader support was brought to the game and NVIDIA updated their drivers... so I'm not risking a potential repeat of that. Gonna wait on more DX12 games and see what technologies they favour.
 

The compute workloads for things involving FP32 etc. don't really have much to do with async. The async compute stuff is just the scheduler supporting multiple command queues at once and distributing them out to the GPU. The tier stuff won't cause any issues; it's purposefully designed that way, which is why it's tiered in the first place. A tier is just there in case a GPU supports it and can be enabled. The core DX12 improvements are universal across every tier.
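
The upside is easy to show with toy numbers: if compute work can run alongside the graphics queue instead of after it, a frame costs the larger chunk rather than the sum. The millisecond figures below are invented, and real overlap is never this perfect:

```python
# Toy async-compute illustration with invented per-frame workloads
graphics_ms = 10.0  # geometry, shadows, shading on the graphics queue
compute_ms = 4.0    # e.g. post-processing submitted on a compute queue

serial = graphics_ms + compute_ms      # one queue, back to back
overlapped = max(graphics_ms, compute_ms)  # ideal full overlap

print(f"serial:  {serial:.0f} ms/frame ({1000/serial:.0f} fps)")
print(f"overlap: {overlapped:.0f} ms/frame ({1000/overlapped:.0f} fps)")
```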
 

Sure! Since I'm getting a 1440p monitor, the differences might be so small that a system upgrade isn't warranted; at least that's my hope... To put it another way, would you put a high-end Pascal in my system?
 
Who says you need to put a high-end Pascal in when a slightly lower-end Pascal would do the trick?
 