AMD's on the path to Exascale computing with their new Radeon Instinct MI100 HPC GPU

That's a seriously strong card. 120 Compute Units?! That's going to be a pretty big die. Wonder how they managed to alter this architecture compared to RDNA2. Would be cool to see the differences.
 
I'm guessing that the Infinity Cache is gone, which frees up space for compute units. Cutting away other gaming features will also leave more room for additional compute.

With this market offering richer margins, AMD may also be getting more aggressive with their die sizes.
 
You specifically mentioned in the article how the Infinity Cache improves performance, and the AMD source states in multiple sections that it's there. I highly doubt they would alter such a major component of the underlying architecture; changing the cache layout would be a massive and expensive undertaking.

Not sure why you're suddenly under the assumption that it's gone? Unless I'm reading what you said wrong.

I'm assuming the gaming stuff is out, maybe with the RT cores refocused on compute tasks and the support needed for that, alongside a large die size increase. Still would like to see a direct comparison just because it's cool.
 
RDNA 2 and CDNA are not the same. Both architectures have different requirements and therefore have different features. Infinity Fabric and Infinity Cache are not the same either.

I don't know if CDNA has an Infinity Cache, which is why it's a guess, but with that much HBM2, do they need an on-die cache?

AMD makes no mention of Infinity Cache in their spec page or video.

https://www.amd.com/en/products/server-accelerators/instinct-mi100

CDNA and RDNA 2 are designed specifically for certain workloads and price points.

Infinity Cache is a cost-effective solution for AMD and gamers, but when you are using four stacks of HBM2, the cache isn't that necessary. My guess is that AMD saved the die space for additional compute units.
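For anyone who wants to put rough numbers on the bandwidth question, here's a back-of-the-envelope roofline sketch. The ~11.5 TFLOPS FP64 and ~1.2 TB/s figures come from AMD's spec page linked above; the kernel in it is a made-up streaming example, not anything AMD published.

```cpp
#include <cstdio>

// Back-of-the-envelope roofline check: is a kernel compute-bound or
// memory-bound given the MI100's quoted peaks? Peak numbers are taken
// from AMD's public spec page; the example kernel is hypothetical.
int main() {
    const double peak_fp64_tflops = 11.5;  // MI100 quoted peak FP64
    const double hbm2_bw_tbps     = 1.2;   // ~1.2 TB/s HBM2 bandwidth

    // Machine balance: FLOPs the GPU can issue per byte it can pull from HBM2.
    const double machine_balance = (peak_fp64_tflops * 1e12) / (hbm2_bw_tbps * 1e12);

    // Hypothetical streaming kernel (triad-style): ~2 FLOPs per 24 bytes moved.
    const double kernel_intensity = 2.0 / 24.0;  // FLOPs per byte

    std::printf("machine balance : %.2f FLOPs/byte\n", machine_balance);
    std::printf("kernel intensity: %.2f FLOPs/byte\n", kernel_intensity);
    std::printf("=> this kernel would be %s-bound at those peaks\n",
                kernel_intensity < machine_balance ? "memory" : "compute");
    return 0;
}
```

At roughly 9-10 FLOPs/byte of machine balance, whether a big on-die cache buys you much depends entirely on how memory-bound your kernels actually are.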
 
I guess we will find out whether it has an Infinity Cache or not once they release more info. It's always faster to hit cache than off-die memory, HBM2 or not, so it can still be useful for big data applications or DNN workloads (which are extremely memory-pipeline intensive).

For the record, I understand they are different and noted as much. That doesn't mean they don't share technologies; it's very reasonable to assume they use both. Last time they used HBM they used an interconnect, and now they could be using Infinity Fabric to supplement that and add cache at the same time. Again, just wait and see, I suppose.
 
Wonder if it's even possible to buy one - I've never seen an MI-60 in the wild. You can pick up MI-50s easily enough.
And can you still play with ROCm on an RDNA card?
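On the ROCm side, the simplest check is just to see what the runtime enumerates. Here's a minimal HIP device-query sketch (build with hipcc; I'm assuming the gcnArchName field that recent ROCm releases expose in hipDeviceProp_t):

```cpp
#include <cstdio>
#include <hip/hip_runtime.h>

// Lists whatever GPUs the ROCm runtime can see.
// Build with: hipcc device_query.cpp -o device_query
int main() {
    int count = 0;
    hipError_t err = hipGetDeviceCount(&count);
    if (err != hipSuccess) {
        std::printf("hipGetDeviceCount failed: %s\n", hipGetErrorString(err));
        return 1;
    }
    std::printf("ROCm-visible devices: %d\n", count);
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        if (hipGetDeviceProperties(&prop, i) != hipSuccess) continue;
        std::printf("  [%d] %s (arch %s), %.1f GiB\n",
                    i, prop.name, prop.gcnArchName,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

Whether a given RDNA card actually shows up there, and whether it's officially supported, has varied by ROCm release, so treat it as a way to check rather than a guarantee.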
 