the big one

bazx

http://www.xtremesystems.org/forums/showthread.php?t=160416

http://downloads.guru3d.com/download.php?det=163

Minor bugfixes:

Fixed the I2C write routine for the ATI R600 graphics processor family, which had been erroneously commented out in the previous version's source code.

Minor UI and localization fixes.

What's new:

Updated databases for Detonator and ForceWare drivers. Added databases for ForceWare 163.69 and 163.71.

Improved driver-level overclocking module for NVIDIA display adapters:

Added a new user interface for the independent shader clock control interfaces for the G8x GPU family in ForceWare 163.67 and newer drivers. The new UI includes:

New independent slider for adjusting shader domain clock.

New "Link clocks" option allows you to use either traditional shader/ROP clock ratio based overclocking or completely asynchronous shader/ROP clock overclocking. You may either tick the "Link clocks" option and adjust the ROP clock only, letting RivaTuner overclock the shader domain using the VGA BIOS default shader/ROP clock ratio as pre-163.67 drivers did, or untick the "Link clocks" option and adjust the domain clocks fully independently.

New overclocking profile format supporting independent shader clocks. Please note that old overclocking profiles are not supported by RivaTuner, so you must recreate any previously existing overclocking profiles.

Previously available power user oriented ShaderClockRatio registry entry is now obsolete and it no longer exists in RivaTuner's database. Previously available ratio based shader domain overclocking functionality is now fully covered by new independent shader clock slider and new "Link clocks" option.

New user interface is provided by default under ForceWare 163.67 and newer drivers under Windows Vista; however, Windows XP owners can also force the Vista-specific overclocking interfaces by setting NVAPIUsageBehaviour to 1. If needed, shader clock control can be forcibly disabled and the traditional overclocking module UI appearance restored by setting NVAPIShaderClockControl to 0.
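To make the difference between the two modes concrete, here is a minimal illustrative sketch (not RivaTuner's actual code; the function name, parameters, and example numbers are all hypothetical) of how "Link clocks" mode derives the shader clock from the ROP clock via the BIOS default ratio, versus the fully asynchronous mode where the shader slider acts on its own:

```python
# Illustrative sketch only - not RivaTuner source. Shows the two "Link clocks"
# behaviours described in the changelog. All names and numbers are made up.

def shader_clock(rop_clock_mhz, link_clocks, bios_ratio=None,
                 independent_shader_mhz=None):
    """Return the effective shader domain clock in MHz."""
    if link_clocks:
        # Linked mode: shader clock tracks the ROP clock at the VGA BIOS
        # default shader/ROP ratio, as pre-163.67 drivers behaved.
        return rop_clock_mhz * bios_ratio
    # Asynchronous mode: shader clock is set independently by its own slider.
    return independent_shader_mhz

# Linked: adjust ROP only; shader follows at the (hypothetical) BIOS ratio.
print(shader_clock(575, link_clocks=True, bios_ratio=2.35))

# Unlinked: ROP and shader domains are adjusted fully independently.
print(shader_clock(668, link_clocks=False, independent_shader_mhz=1590))
```

The point is simply that ticking "Link clocks" leaves you with one effective knob (ROP), while unticking it gives you two.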

Power user oriented adjustable minimum and maximum clock slider limits have been expanded from 25%-300% to 10%-800%.

Added experimental SLI overclocking for Vista. Please note that I have no SLI rig for testing and development, so this feature has been added blindly and RivaTuner still doesn't provide official SLI support.

Minor UI changes and improvements.
 
just opened it up and there it is

and I wonder how long the beta testers have had this function

shaderclocks.gif
 
I noticed this one earlier but was too immersed in some ddr3 action to try it.

Talk about being able to maximise the clocks now - should finally be able to fully optimise these babies. I can imagine Baz you're like me, grinning like a Cheshire cat :)
 
mav this is what makes me want to bench 3D

you might want to fix the title a bit lol

you will need this edit to make it work

only with the 163.67 driver and up

you need to go to Power User>Rivatuner\System>NVAPIUsageBehavior and set the value to 1

and up pops the extra slider
 
Bugger, even more I have to learn about these nvidia thingies. I'm so out of touch I'm not even sure of the significance, but if you guys are raving, it must be good.
 
Only just got round to using this but thought I would share my findings on 8800GTX.

I found that by keeping the shader clock at 1600MHz I can squeeze more from the GPU clock and improve scores in 3DM06 - when I find the ceiling I will report more.
 
Maxed out score using what I have now on my everyday installation, optimised only by using Task Manager to drop unwanted programs - 13474:

http://service.futuremark.com/compare?3dm06=3393547

I was unable to better the above score today.

Using the same unclean install and using Riva to keep the shader clock at 1590MHz while increasing the GPU clock from the previous max of 653 to 668, I achieved 13588:

http://service.futuremark.com/compare?3dm06=3393899

Interestingly, compare this against the 13604 I achieved with the same hardware and a clean/optimised install, but with an X6800 CPU at 4.4GHz. If you compare, you can see that my FPS scores are higher now than they were back then:

http://service.futuremark.com/compare?3dm06=2586440

In fact it's safe to say that if I popped in the X6800, phase cooled it, ramped it up to 4.4GHz and spanked a few scores out using the current set-up and Riva shader options, the improvement would be worthy.

I won't try to sell you this theory as I have not used exact science in the above, but in my opinion there is more mileage to be gained by holding an optimised shader clock value and pushing the GPU clock higher.

Also - if the unit was voltmodded I am sure it could be controlled and clocked further than previously thought possible.

However, it's a ballsache to do and I was unable to secure a card at the price I was willing to pay - given new cards are on their way I won't invest until I see a clear winner.
 