AMD Radeon HD 7970 Paper release in the US

Radeon HD 7970

  • Radeon HD 7970

    Votes: 0 0.0%
  • GTX 580

    Votes: 0 0.0%
  • Nvidia fanboy, I can't be honest

    Votes: 0 0.0%
  • AMD fanboy, I can't be honest

    Votes: 0 0.0%

  • Total voters
    0

oneseraph

New member
Before I spill the beans on gaming performance, first a word about some of the new features of the Radeon HD 7970.

  1. Graphics Core Next "This is the replacement for AMD's VLIW4 SIMD engine."
  2. Revamped tessellation engine "AMD claims a 1.7x to 4x performance increase, depending on the number of subdivisions applied to the source primitive."
  3. PowerTune and ZeroCore Power "Long story short, this gives the 7xxx GPUs the best performance per watt of any GPU on the planet."
  4. PCI Express 3.0 "Today's desktop software cannot seem to saturate PCI Express 2.0 slots, so it remains to be seen whether this feature matters."
  5. DX 11.1 "This will be nice when Windows 8 is released in mid-2012."
  6. OpenCL 1.2 "Good to see AMD supporting the developer community."
  7. DirectCompute 11.1 "This could yield some very impressive advancements when combined with GCN (Graphics Core Next)."
  8. Partially Resident Textures "This one is tricky to explain, but trust me, it is potentially huge."
  9. Stereoscopic 3D enhancements "Yep, 3D gets Eyefinity and CrossFire support plus custom resolutions."
  10. UVD and the new Video Codec Engine "New and improved; AMD added dual-stream HD+HD acceleration to its newest iteration of the Unified Video Decoder."

There are more new features but I think I hit the highlights.

Now on to the performance.

Several review sites have benchmarked the Radeon HD 7970. The battery of tests, both synthetic and real-world games, included all the usual suspects. Despite AMD only having early release drivers available while Nvidia's drivers are very mature, the Radeon HD 7970 trounced the GTX 580 in most cases, and on the rare occasion when the GTX 580 managed to best the Radeon HD 7970 it was by only the smallest of margins.

With that said, the Radeon HD 7970 is priced about $50.00 US higher than the GTX 580. So, is the performance gain worth the extra cost? Well, that depends on what resolution you play your games at. If you rock your games at 1920x1080 or higher, then the answer is probably yes, because at those resolutions and above the performance per dollar of the Radeon HD 7970 is higher than the GTX 580's. If you consider performance per watt, the Radeon HD 7970 is 30-40% more efficient than the GTX 580. I think it would take a very long time to recover $50.00 US on your power bill, but every little bit helps.
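
That power-bill claim is easy to sanity-check. Here is a minimal back-of-envelope sketch; every input in it (an assumed 50 W load-power saving, 20 hours of gaming a week, $0.12 per kWh) is a hypothetical placeholder of mine, not a figure from the reviews:

    // Back-of-envelope payback estimate -- all inputs are hypothetical placeholders.
    #include <cstdio>

    int main() {
        const double priceDeltaUsd = 50.0;   // HD 7970 premium over the GTX 580 (from the post)
        const double wattsSaved    = 50.0;   // assumed load-power saving, hypothetical
        const double hoursPerWeek  = 20.0;   // assumed gaming time, hypothetical
        const double usdPerKwh     = 0.12;   // assumed electricity rate, hypothetical

        const double kwhPerWeek     = wattsSaved * hoursPerWeek / 1000.0;
        const double usdPerWeek     = kwhPerWeek * usdPerKwh;
        const double weeksToPayBack = priceDeltaUsd / usdPerWeek;

        std::printf("~%.0f weeks (~%.1f years) to recover the premium\n",
                    weeksToPayBack, weeksToPayBack / 52.0);
        return 0;
    }

Under those assumptions it works out to roughly eight years, which is why "a very long time" is putting it mildly.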

Here is something weird. I looked at reviews at four different sites. Three of the sites showed the Radeon HD 7970's noise level at max load 1 dB higher than the GTX 580's, but the fourth site showed it as 5 dB higher at max load. All of the sites show both cards at the same noise level at idle. So I think there are some interesting questions about noise to be investigated.

Overclocking

The overall consensus is that the Radeon HD 7970 is an overclocker's dream. All of the tests I read achieved at least 15% GPU overclocks without voltage changes. The memory overclocks started at 15%, and one site achieved a 25% memory overclock without a voltage change. The overclock was stable enough for them to rerun their suite of benchmarks without a problem. Now, I should point out that the sites have test samples, so it will be interesting to see how the retail units do. If the samples are any indication, the overclocking story of the HD 7970 could be pretty interesting. Oh, did I mention the dual BIOS?

Competition

So it looks like once again AMD has drawn first blood. Nvidia needs to come back strong with its next-generation Kepler architecture. Let's hope what Nvidia brings to the table is equal to the task; it will be no walk in the park given what AMD has just released. Unfortunately, Nvidia says the Kepler architecture won't see retail until late in the second quarter of 2012, "if everything goes to schedule". Six months is an eternity in this market space. So let's all hope Nvidia gets a move on. Otherwise we won't see any real price drops on graphics cards for some time.
 
In reality we're still talking about -3 to +15 fps on games that are already running over 50 fps in the majority of cases. If you already have a card from either manufacturer doing over 50 and you're happy with the quality of the end result, with or without PhysX or quality textures, there's little reason to upgrade.

DX11 -> 11.1 is arguably the same story as DX10 -> 10.1.

I don't personally think you'll see a 680. A 780 may come along later in the year; in the main, I think the 600 series will be merely die shrinks and named accordingly.

If you're buying new, though, there's certainly reason to weigh your bank balance against what you want to buy.
 
In mid-January I am building my new rig. I don't want to build an SLI or CrossFire system. By using a single-GPU card I can save money on the motherboard, power supply and case. What I do want is a graphics card that can run any DX11 game smoothly at 1920x1200. In addition, having hardware support for DX 11.1 is important because I don't want to have to buy another card in six months when Windows 8 comes out. That seems like a huge waste of money.

So since I am building a new machine next month, I will either go with the HD 7970 or the HD 7950. Both cards are DX 11.1 compliant. The HD 7970 is far more powerful than the GTX 580 and only costs a bit more, and the HD 7950 is probably equal to the GTX 580 and costs a bit less. Basically, neither card should need to be upgraded for quite some time.

Unlike the DX 10 to DX 10.1 step, the features added in DX 11.1 are pretty major. Mostly the SDK implements a set of API features that make a DX 11 implementation far more efficient and more powerful. Below are some of the new features in Direct3D 11.1 (a quick sketch of one of them, binding a constant-buffer subrange, follows the list).

The following functionality has been added in Direct3D 11.1.

Shader tracing

Direct3D device sharing

Check support of new Direct3D 11.1 features and formats

Create larger constant buffers than a shader can access

Use logical operations in a render target

Force the sample count to create a rasterizer state

Process video resources with shaders

Change subresources with new copy options

Discard resources and resource views

Support a larger number of UAVs

Bind a subrange of a constant buffer to a shader

Retrieve the subrange of a constant buffer that is bound to a shader

Clear all or part of a resource view

Map SRVs of dynamic buffers with NO_OVERWRITE

Use UAVs at every pipeline stage
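
To show the kind of thing one of those items ("Bind a subrange of a constant buffer to a shader") enables, here is a minimal sketch using ID3D11DeviceContext1::VSSetConstantBuffers1. The context, buffer and offsets are hypothetical placeholders of mine, not anything from the list above:

    // Minimal sketch: bind a window out of one large constant buffer to the
    // vertex shader using the Direct3D 11.1 *SetConstantBuffers1 calls.
    #include <d3d11_1.h>

    void BindConstantBufferWindow(ID3D11DeviceContext1* ctx1, ID3D11Buffer* bigCB)
    {
        // Offsets and counts are measured in shader constants (16 bytes each)
        // and must be multiples of 16 constants, i.e. 256-byte aligned.
        UINT firstConstant = 16 * 4;   // start 1 KB into the buffer
        UINT numConstants  = 16 * 16;  // expose a 4 KB window to the shader

        ctx1->VSSetConstantBuffers1(
            0,               // start slot
            1,               // number of buffers
            &bigCB,          // the large constant buffer
            &firstConstant,  // where the bound subrange begins
            &numConstants);  // how big the bound subrange is
    }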

Shader tracing

Direct3D 11.1 lets you use shader tracing to ensure that your code is performing as intended, and if it isn't, you can discover and remedy the problem. Shader tracing is implemented in D3dcompiler_nn.dll.

The shader tracing API consists of the following methods.

ID3D11RefDefaultTrackingOptions::SetTrackingOptions

ID3D11RefTrackingOptions::SetTrackingOptions

ID3D11TracingDevice::SetShaderTrackingOptions

ID3D11TracingDevice::SetShaderTrackingOptionsByType

ID3D11ShaderTraceFactory::CreateShaderTrace

ID3D11ShaderTrace::TraceReady

ID3D11ShaderTrace::ResetTrace

ID3D11ShaderTrace::GetTraceStats

ID3D11ShaderTrace::PSSelectStamp

ID3D11ShaderTrace::GetInitialRegisterContents

ID3D11ShaderTrace::GetStep

ID3D11ShaderTrace::GetWrittenRegister

ID3D11ShaderTrace::GetReadRegister

D3DCompile2

D3DCompileFromFile

D3DDisassemble11Trace

D3DDisassembleRegion

D3DGetTraceInstructionOffsets

D3DReadFileToBlob

D3DSetBlobPart

D3DWriteBlobToFile
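
I won't try to sketch the tracing interfaces themselves here, but one of the new compiler helpers in that list, D3DCompileFromFile, is easy to illustrate. A minimal sketch; the file name and entry point are hypothetical placeholders:

    // Minimal sketch: compile a pixel shader from disk with one of the new
    // D3DCompiler entry points listed above.
    #include <windows.h>
    #include <d3dcompiler.h>

    HRESULT CompilePixelShader(ID3DBlob** bytecodeOut)
    {
        ID3DBlob* errors = nullptr;
        HRESULT hr = D3DCompileFromFile(
            L"example.hlsl",              // hypothetical source file
            nullptr, nullptr,             // no macros, no include handler
            "PSMain",                     // hypothetical entry point
            "ps_5_0",                     // shader model target
            D3DCOMPILE_ENABLE_STRICTNESS, // compile flags
            0,                            // effect flags
            bytecodeOut,
            &errors);

        if (errors) {
            // Dump any compiler diagnostics to the debugger output.
            OutputDebugStringA(static_cast<const char*>(errors->GetBufferPointer()));
            errors->Release();
        }
        return hr;
    }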

Direct3D device sharing

Direct3D 11.1 enables Direct3D 10 APIs and Direct3D 11 APIs to use one underlying rendering device.

This Direct3D 11.1 feature consists of the following methods and interface.

ID3D11Device1::CreateDeviceContextState

ID3D11DeviceContext1::SwapDeviceContextState

ID3DDeviceContextState
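
To give a feel for how those calls fit together, here is a minimal sketch. It assumes an ID3D11Device1/ID3D11DeviceContext1 pair has already been created elsewhere; the variable and function names are mine, not from the docs:

    // Minimal sketch: create a context-state object on an existing device,
    // swap it onto the immediate context, then restore the previous state.
    #include <d3d11_1.h>

    HRESULT UseContextState(ID3D11Device1* device1, ID3D11DeviceContext1* ctx1)
    {
        D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0 };
        D3D_FEATURE_LEVEL chosen   = D3D_FEATURE_LEVEL_11_0;
        ID3DDeviceContextState* d3d11State = nullptr;

        HRESULT hr = device1->CreateDeviceContextState(
            0,                        // flags
            levels, 1,                // acceptable feature levels
            D3D11_SDK_VERSION,
            __uuidof(ID3D11Device),   // which API "behaviour" this state emulates
            &chosen,
            &d3d11State);
        if (FAILED(hr)) return hr;

        // Make the new state current; the previously active state comes back out.
        ID3DDeviceContextState* previous = nullptr;
        ctx1->SwapDeviceContextState(d3d11State, &previous);

        // ... issue Direct3D 11 work here ...

        // Put the old state (which might belong to the Direct3D 10 side of the
        // application) back on the context.
        ID3DDeviceContextState* swappedOut = nullptr;
        ctx1->SwapDeviceContextState(previous, &swappedOut);

        if (swappedOut) swappedOut->Release();
        if (previous)   previous->Release();
        d3d11State->Release();
        return S_OK;
    }

The point of the design is that a Direct3D 10 portion and a Direct3D 11 portion of the same application can share one underlying device and simply swap context states instead of creating two devices.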

The changes between DX 11 and 11.1 are pretty extensive. You can see why hardware support for the API is highly desirable.
 
Far more powerful? Hmmm.

To be honest, the DX 11.1 listing looks similar to me to the 10.1 listing (the additions to 10).

It's a Microsoft botch to push Windows 8 when it eventually arrives. I'd recommend you get an 8970 by that time, and hopefully it won't take the 9970's arrival before 11.1 is even thought about by game devs. They didn't like 10.1. Heck, most still don't like 10.

(understanding the console port thingy)
 
Yes, the HD 7970 is far more powerful than the GTX 580; metrics don't lie.

Could you show me that DX 10 to DX 10.1 listing? I seem to have lost my copy in my MSDN. My memory must be going, because I don't remember DX 10.1 containing methods that allowed DX 9 and DX 10 APIs to use one underlying rendering device. It's odd, because at the time I was working on Alpha, Spec, image map and Trans shaders for the XNA SDK. Anyway, the way I recall it is that DX 10.1 added blend modes, floating-point rules, multisample anti-aliasing, rasterization rules, texture sampling and culling behavior. It seems like there were a couple of other little things, but I can't remember. As you can see, the DX 10 to 10.1 change is nothing like the DX 11 to 11.1 change. If you could get me that listing, I will be happy to give it another look.

I am afraid you have lost me a bit: what do you mean by "8970"? Are you referring to AMD's next generation of graphics cards? They just launched the 79xx cards. Windows 8 is set for release in May or June of 2012; even if they slip by 90 days, that's still August or September of 2012. Do you think AMD is going to release 89xx cards in the next nine months? Or are you referring to Nvidia's Kepler architecture, more commonly the GTX 6xx series? That would make sense, because it won't release until late May or early June.

I don't know what game developers you work with, but the devs I work with see APIs as tools. The DX APIs are tools for Xbox and PC development. I have never heard any of the professionals I work with complain about any of the DX APIs. I can't imagine any of them being so unprofessional and petty.

So how many games have you shipped for the Xbox 360 and the PC? I am very interested in your take on the "console port thingy".
 
Two votes for the 580 even though the 7970 is the better card, with even more performance gains to come from better drivers and non-reference versions?

I think you two guys should have ticked the "Nvidia fanboy, I can't be honest" box.
 
What sieb said. How can you choose a more expensive, more power-consuming card with less support and way less performance?
 
I know the 7970 is the better card, but I have a really good 580 so I voted for the 580. If I didn't have a Graphics Card I would have chosen the 7970 and I would have also bought one.
 
Yeah, if you already have a 580 there is no point in upgrading, but I think the vote was more to do with which is the better card.
 
I know the 7970 is the better card, but I have a really good 580 so I voted for the 580. If I didn't have a Graphics Card I would have chosen the 7970 and I would have also bought one.

Ya, I can see how that might affect your vote. The technology release cycle lottery can really suck sometimes. On another note how do you like your motherboard? It is one of the boards I am looking at for my new build.
 
Struck a nerve though, aye?
It's not like those moaning about it clicked the AMD fanboy selection. Additionally, if you have solace with your green card in your PC, why would anyone question it? I'll keep a donated 6970 until it has to go back; it plays anything I rarely get a chance to play. But if the question is buying your own... well, that's different.

Microsoft releases the feature information for Direct3D at each revision as it comes along, for devs. The Dev Center still has legacy information for previous incarnations that's pretty easy to find. Devs could find DX7 stuff if they really, really wanted to, I'm sure.

---------------------

Direct3D 10.1 extends the feature set of Direct3D 10.0 with the following new features:

•Blend Modes - Independent blend modes per render target using the new blend-state interface (see ID3D10BlendState1 Interface). Dual source blending operations are restricted to render target slot 0; you may not write to other outputs or have any render targets bound to slots other than slot 0.

•Culling Behavior - Zero-area faces are automatically culled; this affects wireframe rendering only.

•Floating Point Rules - Uses the same IEEE-754 rules for floating-point EXCEPT 32-bit floating point operations have been tightened to produce a result within 0.5 unit-last-place (0.5 ULP) of the infinitely precise result. This applies to addition, subtraction, and multiplication. (accuracy to 0.5 ULP for multiply, 1.0 ULP for reciprocal).

•Formats - The precision of float16 blending has increased to 0.5 ULP. Blending is also required for UNORM16/SNORM16/SNORM8 formats.

•Multisample Anti-Aliasing - Multisampling has been enhanced to generalize coverage based transparency and make multisampling work more effectively with multi-pass rendering. To achieve this, all multisample semantics are defined as if the pixel shader always runs once per sample (sample-frequency), computing a separate color per sample. If a pixel shader doesn't use any per-sample attributes, then it will compute the same value for each covered sample in a pixel. In that case, it is equivalent to the hardware executing the shader once per pixel (pixel-frequency), replicating the result to all covered samples. Naturally, running at pixel-frequency always produces the same results as running the same shader at sample-frequency, when the attributes are sampled at a pixel-frequency. The PSInvocations pipeline statistic increments at sample-frequency unless the shader is running at pixel-frequency.

•Pipeline Stage Bandwidth - Increased the amount of data that can be passed between shader stages:

Registers between shader stages: 32 (32-bit x 4-component)

Vertex shader input registers: 32

Input assembler input slots: 32

•Rasterization Rules - The rules for rasterization have changed for lines, in addition, new functionality has been added.

•MultisampleEnable only affects line rasterization (points and triangles are unaffected), and is used to choose a line drawing algorithm. This means that some multisample rasterization behavior from Direct3D 10 is no longer supported.

•New sample-frequency pixel shader execution with primitive rasterization.

•Resources - CopyResource is enabled in two new scenarios:

•Both color and depth/stencil MSAA surfaces can now be used with CopyResource as either a source or destination

•Format Conversion while copying between certain 32/64/128 bit prestructured, typed resources and compressed representations of the same bit widths.

•Texture Sampling - sample_c and sample_c_lz instructions are defined to work with both Texture2DArrays and TextureCubeArrays; use the Location member (the alpha component) to specify an array index.

•Views - TextureCube and the new TextureCubeArray (see D3D10_TEXCUBE_ARRAY_SRV1) are not actual resources, but are new views on a Texture2DArray resource. Create a resource view from a Texture2DArray resource with a new usage flag (D3D10_RESOURCE_MISC_TEXTURECUBE), and use the new ID3D10ShaderResourceView1 interface to bind a cube-texture view to the pipeline.

The new features require a 10.1 device type (see ID3D10Device1 Interface) which can be created by calling D3D10CreateDevice1, or you can create the device and swap chain at the same time by calling D3D10CreateDeviceAndSwapChain1.
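
As a minimal sketch (mine, not Microsoft's) of the first option, creating a hardware 10.1 device on the default adapter, with error handling trimmed:

    // Minimal sketch: ask for a hardware Direct3D 10.1 device.
    #include <d3d10_1.h>

    ID3D10Device1* CreateTenPointOneDevice()
    {
        ID3D10Device1* device = nullptr;
        HRESULT hr = D3D10CreateDevice1(
            nullptr,                      // default adapter
            D3D10_DRIVER_TYPE_HARDWARE,
            nullptr,                      // no software rasterizer module
            0,                            // creation flags
            D3D10_FEATURE_LEVEL_10_1,     // require 10.1-capable hardware
            D3D10_1_SDK_VERSION,
            &device);
        return SUCCEEDED(hr) ? device : nullptr;
    }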

In Windows Vista Service Pack 1, Direct3D 10.0 and Direct3D 10.1 DLLs exist side-by-side on the system. To access 10.1 features, do either of the following:

Accessing 10.1 Features on Vista Gold and Vista Service Pack 1

Developers that wish to support Vista Gold as well as SP1 will have to account for the lack of the new 10.1 API extensions on Vista Gold. Both DXUT and D3DX10 will provide convenience functions to create the appropriate device, based on the DLLs available on the system and the available hardware (10.0 or 10.1). The 10.1 device inherits from the 10.0 device, and can be retrieved using QueryInterface(). It is recommended that each application keeps track of the device type and maintains a pointer to the 10.1 device (if available) to avoid frequent QueryInterface calls when 10.1 functionality is desired. Likewise, where 10.1 resource views and state objects are associated by an application's custom class, it is recommended that the application track whether the object is a 10.0 or 10.1 type to avoid redundant QueryInterface() calls. D3DX10 includes a set of utility functions to simplify this process (see D3DX10CreateDevice and D3DX10CreateDeviceAndSwapChain).

Accessing 10.1 Features on Vista Service Pack 1 Exclusively

Some developers may choose to require Vista Service Pack 1, which will be distributed broadly to end-users and includes a series of improvements outside of Direct3D 10.1. These developers can use the Direct3D 10.1 headers and libraries exclusively, taking a dependency on the Direct3D 10.1 DLLs which support both 10.0 and 10.1 hardware (some calls may fail, however, on 10.0 devices where the new functionality is not supported).

Some additional notes:

•The APIs exposed in the D3DX10.dll will accept both 10.0 and 10.1 devices, and will take advantage of 10.1 functionality when available.

•D3D10SDKLayers.dll supports a 10.1 device and can output the appropriate debug spew for 10.1 features.

•D3D10Ref.dll implements a 10.0 and 10.1 software device.

•D3DX10 and FXC support the updated 10.1 shader model with the following targets: vs_4_1, gs_4_1, ps_4_1, and fx_4_1 which can be bound to a 10.1 device. A 10.1 device supports shader model 4.0 and 4.1 shaders.

•The Direct3D 10.0 effect framework supports 10.0 and 10.1 devices, however, any technique that includes shader model 4.1 shaders or the new 10.1 features must use a 10.1 device.


---------------

Which really means nothing to the lay person. We'll all eventually be using Windows 8, or at least DX 11.1, without a care in the world for what or who uses the APIs, which is where posting information about what's new in the x.1 becomes irrelevant. For the great majority, this isn't going to worry them until at least this time next year, I'm sure. Windows 8 will be out already, and we'll STILL be installing DX9 from the DVDs in a great number of cases (i.e. the console port comment, taken amazingly out of context, but hey, we know how that goes). Heck, most will still be looking for a reason not to keep Windows 7.

If the 8970 at least isn't out before a purely unpatched DX 11.1 game comes out, I'll be beyond surprised. Put it that way, and not offensively.
 
Ya, I can see how that might affect your vote. The technology release cycle lottery can really suck sometimes. On another note how do you like your motherboard? It is one of the boards I am looking at for my new build.

It's a really good motherboard and there is nothing much else to say to be honest. I have my CPU OC'd to 4GHz on only 1.18V. I'm not really sure if that's to do with the CPU or the motherboard or both, but so far the board hasn't given me any problems and it looks the part as well.
 
Struck a nerve though, aye?
It's not like those moaning about it clicked the AMD fanboy selection.

Didn't strike a nerve, just surprised that people would still vote 580 even though the 7970 is the much better card. Fair enough, one person who voted already has a 580, but the vote is more of a "which is the better card" vote, not really an "I already have a 580 so I'll vote for that".

To be honest, I don't care what name is on the card I have, just as long as it's the one that offers the best performance for my money. At this point in time, if it was between the 580 and the 7970, the obvious answer is the 7970.

I hate fanboyism with a passion; there is nothing more annoying than a delusional fanboy who bases all his decisions on a brand instead of on the product and how it compares to others.
 
It's a really good motherboard and there is nothing much else to say to be honest. I have my CPU OC'd to 4GHz on only 1.18V. I'm not really sure if that's to do with the CPU or the motherboard or both, but so far the board hasn't given me any problems and it looks the part as well.

Thanks
 
Didn't strike a nerve, just surprised that people would still vote 580 even though the 7970 is the much better card. Fair enough, one person who voted already has a 580, but the vote is more of a "which is the better card" vote, not really an "I already have a 580 so I'll vote for that".

To be honest, I don't care what name is on the card I have, just as long as it's the one that offers the best performance for my money. At this point in time, if it was between the 580 and the 7970, the obvious answer is the 7970.

I hate fanboyism with a passion; there is nothing more annoying than a delusional fanboy who bases all his decisions on a brand instead of on the product and how it compares to others.

Manufacturers would prefer you call it "brand loyalty", with a passion

It's a nonsense tbh. But if someone's proud of the AMD sticker on their case - fantastic.

It's a similar thing I think about with CPUs. I'd love AMD to come out with something... well, to be honest, Intel haven't budged my opinion on upgrading from a QX9650.

Oh well.
 
Are you happy now, guys? I just based my initial decision on whether or not I would upgrade, given the performance shown in the many reviews.
 