Nvidia promises 40% Performance Boost in Anthem through DLSS

I guess you could force the output to 4K and then let it downsample naturally? There must be settings in the NVidia control panel to force that (or, I guess, in most modern games, if you just add the resolution option for your monitor)? Though I'm not sure how worthwhile that would be image-quality or performance-wise.
 
I guess you could force the output to 4K and then let it downsample naturally? There must be settings in the NVidia control panel to force that? Though I'm not sure how worthwhile that would be image-quality or performance-wise.

Don't think so in the GUI. But for those experienced with Nvidia Inspector, I'm sure there are parameters there to configure this.
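
For a rough idea of what that approach costs: shading work scales more or less with pixel count, so a quick back-of-envelope sketch (plain Python, and the 1440p monitor is just an assumed example) looks like this:

```python
# Back-of-envelope cost of rendering at 4K and downsampling to a 1440p
# display. Pixel-shading work scales roughly with pixel count, so this
# gives a crude idea of the extra GPU load (illustrative figures only).

native = (2560, 1440)   # assumed monitor resolution
forced = (3840, 2160)   # forced 4K render target

native_px = native[0] * native[1]
forced_px = forced[0] * forced[1]

print(f"Native pixels: {native_px:,}")
print(f"Forced pixels: {forced_px:,}")
print(f"Extra shading work: {forced_px / native_px:.2f}x")  # ~2.25x
```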
 
Getting a bit fed up with Ray Tracing and DLSS as they offer very little compared to the extra cost involved with the cards.

Ray Tracing does not offer a game changing experience and is often hardly noticed.

DLSS offers higher fps but the same can be achieved more or less by turning other settings down.

In light of the above, would I still have bought my RTX cards? Of course I would, as they are about 30% faster than Pascal.

Having said that, in life there are priorities far more important than gaming, and those are what most people would rather spend their money on.


Turing architecture -
Fast cards.
Gimmicky features like RTX and DLSS.
Very overpriced and terrible value.

NVidia, please don't release another crap architecture like Turing again.

Even if NVidia had released a cut-down Volta architecture for gaming, it would have been both faster and cheaper than Turing. What a joke.
 
They offer very little now because it's new software. Give it time and it'll improve.
AI isn't going anywhere; it'll only get better. But it sure as hell had better not get more expensive, though being as greedy as Nvidia is, it probably will.
 
They offer very little now because it's new software. Give it time and it'll improve.
AI isn't going anywhere; it'll only get better. But it sure as hell had better not get more expensive, though being as greedy as Nvidia is, it probably will.

To make it work better, NVidia have to add more transistors to their chips, which will increase the price.

The reason things like Ray Tracing are not that good is that NVidia are asking too much of the node the Turing cards are on, which means very large, expensive chips.

I very much doubt that Ray Tracing will get much better for Turing, as we are already nearly 6 months into the Turing architecture, and in another 6 months we will be looking at its successor.

NVidia have made a total mess of Ray Tracing on Turing, and it is their responsibility and failure that there are not adequate numbers of games that can run all the features of RT on the cards. For this, NVidia should be heavily criticised, not the software companies who were forced to deal with inadequate hardware that cannot properly use Ray Tracing. Only when people really criticise NVidia for this mess, both with their wallets and their opinions, will they start producing cards that are fit for purpose.
 
They did exactly the same with programmable shaders, to be fair. First-gen hardware is always the least efficient by a long shot, but someone has to make it before you can get wide-scale testing and development.

Even the few ray tracing software implementations that have been available have come a long way in the roughly 6 months since Turing's launch, though, and upcoming implementations seem to improve on that efficiency further. Of course, real progress comes from mature software and hardware together, but every early adopter should know this already.

But I doubt we'll be seeing an NVidia replacement in the next 6 months; they haven't even finished the Turing roll-out yet, and if they were to use first-gen 7nm they'd already be taping out, but 7nm demand statements from Samsung & TSMC don't seem to indicate that. It's quite likely NVidia are waiting for EUV for their next-generation cards, which would put 2021 as the earliest we'd expect them, but that gap still wouldn't be nearly as big as the two and a half years between Pascal and Turing.
 
To make it work better, NVidia have to add more transistors to their chips, which will increase the price.

Not true. You can still increase the IPC of the current implementation with better designs. On top of that, by moving to smaller nodes, which they will do in the future, they can fit more transistors into the same area as now, thereby also increasing performance for the same amount of money. Assuming 7nm is mature enough and there is capacity available for Nvidia to produce the volumes they want, it would be cheaper or the same price. Nvidia will, as always, increase the price, because they can and they love their 80% margins.
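
As a very rough illustration of the density side of that argument (idealised scaling only; node names are partly marketing and real density gains are smaller, so treat this as an upper bound):

```python
# Idealised transistor-density gain from a node shrink.
# In the ideal case density scales with 1/(feature size)^2; real
# foundry nodes fall well short of this, so this is only an upper bound.

old_node_nm = 12   # the 12nm-class process Turing is built on
new_node_nm = 7    # a hypothetical 7nm successor

ideal_density_gain = (old_node_nm / new_node_nm) ** 2
print(f"Ideal density gain: {ideal_density_gain:.1f}x")  # ~2.9x transistors per mm^2
```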

On the software side you can still increase performance as well. As it matures, it will improve everything about it, both on current and future hardware.
 
Getting a bit fed up with Ray Tracing and DLSS as they offer very little compared to the extra cost involved with the cards.

Ray Tracing does not offer a game changing experience and is often hardly noticed.

DLSS offers higher fps but the same can be achieved more or less by turning other settings down.

In light of the above, would I still have bought my RTX cards? Of course I would, as they are about 30% faster than Pascal.

Having said that, in life there are priorities far more important than gaming, and those are what most people would rather spend their money on.


Turing architecture -
Fast cards.
Gimmicky features like RTX and DLSS.
Very overpriced and terrible value.

NVidia, please don't release another crap architecture like Turing again.

Even if NVidia had released a cut-down Volta architecture for gaming, it would have been both faster and cheaper than Turing. What a joke.


Couldn't agree more. We got roughly a 27% performance increase with the 2080 Ti vs a 1080 Ti, yet a 60% price increase. That's ludicrous. If we compare previous generations' performance/price increases, this gen should actually have been cheaper, considering the lacklustre perf increase.
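
Putting rough numbers on that value argument (just the two figures quoted above, nothing official):

```python
# Change in performance per unit of money, 1080 Ti -> 2080 Ti,
# using the ~27% performance and ~60% price increases quoted above.

perf_gain  = 1.27   # relative performance
price_gain = 1.60   # relative price

value_change = perf_gain / price_gain
print(f"Performance per pound: {value_change:.2f}x")  # ~0.79x, i.e. ~21% worse value
```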
 
The gains per generation from hardware (ASIC) optimisation alone are often single-digit figures now though, sometimes not even that. There was a great research paper recently that analysed the gains of ASICs over time and which gains were from optimisation and which were from transistor scaling: http://parallel.princeton.edu/papers/wall-hpca19.pdf

The results for modern GPUs are below (the gains possible from optimisation seem to tend towards a finite value over time for each type of ASIC, so the possible gains get smaller and smaller; they define some formulas in there for it regarding the CMOS scaling figures):
[Chart from the paper: GPU performance gains attributed to optimisation vs transistor scaling over time]

As you can see, while 99th-percentile frame times have improved around 45% over 7 years through optimisation, general gains are in the ~20%-over-~7-years range, with around 4 times more performance gain coming from transistor scaling in that time. The fact is it's hard to play with new and interesting concepts without the bump in transistor budget that allows higher-complexity blocks, which is why Pascal to Turing, on effectively the same node, had to increase die size considerably to achieve that performance gain. The price per mm² of Pascal and Turing was roughly equivalent across each price segment of the stack at launch (and Turing is now cheaper per mm²), with the marketing names being the main change:
GTX 1060 = 200 mm² (full chip), GTX 1660 Ti = 288 mm² (full chip)
GTX 1080 = 314 mm², RTX 2070 = 445 mm², both fully enabled chips
GTX 1080 Ti = 471 mm² (partially enabled), RTX 2080 = 545 mm² (fully enabled)
RTX 2080 Ti = 754 mm² (fully enabled)

As you can see, Turing's size increases don't "pay off" with current performance gains, i.e. technically Pascal scaled better per mm² increase, but that's because the Turing additions that take up that extra die space aren't properly put to use yet. It's the easiest way you can essentially mathematically show Turing isn't optimally used in most current titles and requires some more work to get the most out of it; a Turing die in many modern games will be mostly dark silicon at the moment.
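
To put a quick number on that, here's the same point as a sketch, using the die sizes listed above and the ~27% 2080 Ti vs 1080 Ti uplift quoted earlier in the thread (that uplift is this thread's estimate, not an official figure):

```python
# Performance gained per unit of extra die area, GTX 1080 Ti -> RTX 2080 Ti.
# Die sizes are from the list above; the ~1.27x uplift is the figure
# quoted earlier in the thread, so the result is illustrative only.

pascal_mm2 = 471    # GTX 1080 Ti die size (mm^2)
turing_mm2 = 754    # RTX 2080 Ti die size (mm^2)
perf_gain  = 1.27   # relative performance in current titles

area_gain = turing_mm2 / pascal_mm2
print(f"Die area increase:    {area_gain:.2f}x")               # ~1.60x
print(f"Perf per mm^2 change: {perf_gain / area_gain:.2f}x")   # ~0.79x, i.e. perf/mm^2 fell
```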
 
People are way too forgiving with PC hardware manufacturers and should not be.

I remember when the CD player first launched in 1983: it was a huge technical achievement, it was not perfect, and it was very expensive.

The one thing it did do is what it said on the box - it had no problem playing CDs to a high standard of music playback. Philips and Sony also made sure that there was a good number of music titles available from day one that could be used with the hardware.

Now look at RTX and DLSS on Turing 6 months after launch and we see very few game titles (none of which can use all the features), underpowered hardware including the RTX Titan, and absolutely no chance of things improving greatly before Turing goes EOL.

Why are PC hardware users prepared to put up with this garbage from NVidia or anyone else who wants to launch a new product?

What ever happened to right first time?

Ray Tracing is very poor on Turing and unlikely to run well until we have seen a couple more node shrinks to increase performance and reduce price.
 
DLSS offers higher fps but the same can be achieved more or less by turning other settings down.
The thing is, DLSS offers higher FPS while offering better image quality at the same time, so there's no need to fiddle with settings and end up with a slightly worse image.

That said, this only works if DLSS works correctly (which the old Metro Exodus review showed).


After watching the video, I'm quite confident that DLSS is properly implemented in Anthem and the results are noteworthy, imho. It's literally a FREE 10-30 FPS.
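
For context on where the "free" frames come from: DLSS renders the scene at a lower internal resolution and upscales the result on the tensor cores, so the shading workload drops roughly with the pixel count. A rough sketch (the 1440p internal resolution here is an assumption for illustration, not a confirmed figure for Anthem):

```python
# Rough shading-cost saving from DLSS-style upscaling.
# Assumes a 4K output with a 1440p internal render resolution
# (illustrative figures, not Anthem's confirmed DLSS settings).

output   = (3840, 2160)
internal = (2560, 1440)

ratio = (internal[0] * internal[1]) / (output[0] * output[1])
print(f"Pixels shaded vs native 4K: {ratio:.0%}")      # ~44%
print(f"Shading work saved:         {1 - ratio:.0%}")  # ~56%
```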
 
To make it work better, NVidia have to add more transistors to their chips, which will increase the price.

The reason things like Ray Tracing are not that good is that NVidia are asking too much of the node the Turing cards are on, which means very large, expensive chips.

I very much doubt that Ray Tracing will get much better for Turing, as we are already nearly 6 months into the Turing architecture, and in another 6 months we will be looking at its successor.

NVidia have made a total mess of Ray Tracing on Turing, and it is their responsibility and failure that there are not adequate numbers of games that can run all the features of RT on the cards. For this, NVidia should be heavily criticised, not the software companies who were forced to deal with inadequate hardware that cannot properly use Ray Tracing. Only when people really criticise NVidia for this mess, both with their wallets and their opinions, will they start producing cards that are fit for purpose.

Get what you are saying, but aren't you being a little hypocritical? You are slamming Nvidia for a poor product, and rightly so, but telling us to vote with our wallets when you yourself have purchased 2 or 3 confuses me. If anything it's you guys (the loyal ones and benchmarkers) who need to make a stand.

It matters not if you are wealthy and the cost of the GPU is 10 minutes' work at the office. The fact is, continuing to purchase that card tells Nvidia that they can continue their marketing strategy.

All those names you see topping 3DMark benchmarks time and again are familiar names. When THOSE people make a stand, you set a precedent that enough is enough. Nvidia view these benchmarks. Once they see a shift in the pattern, they will start to think again.
 
Yeah, no one's being forgiving to NVidia at all (I've never personally bought an NVidia card, though I've been given a couple before). Turing cards are already low sellers, and they more or less admitted they mispriced them after misjudging gaming demand during the crypto boom, but what you're talking about are the issues everyone expected from first-gen hardware. CD players were far from perfect, with their skipping under vibration and such; there are always niggles no one thinks of till the device is widely used.

But that doesn't change the fact that DirectX Raytracing, which RTX is an implementation of, is the most widely and rapidly adopted DirectX extension ever. It's odd to say it hasn't got support when anyone can see it's got the backing of every major game engine in the industry, within a window of time a fraction of the size you'd expect for essentially rewriting paradigms and concepts that are decades old. RTX cards won't be the only raytracing cards on the market by the end of the year, while NVidia isn't expected to replace Turing for a while.

Everyone knows consumer hardware/software optimisation development cycles are a chicken-and-egg scenario: you can't develop one without developing the other, and you can't skip to a point where everything's perfect. These systems are far, far too complex for that. A paradigm shift this major will take years of research to perfect, and expecting otherwise shows a complete lack of understanding of the scale and complexity of the feat this huge alliance of companies from across the industry is undertaking.

Raytracing isn't just a tack-on effect like a GameWorks feature or something; it's a completely different rendering pipeline, and the fact we've got hybrid implementations working so well already is a feat in itself.
 
Using CD players as the example.

When I bought mine on launch day, 1 March 1983, the record shops already had hundreds of titles available to buy, even if they did have to dig them out of their stock room lol.

When the Turing cards were launched, there were precisely zero game titles available that worked, and 6 months down the road that has hardly improved. This is not good enough.

Again going back to the CD player launch, this was a far bigger technical achievement than Ray Tracing on Turing. To Sony and Philips' credit, though, they put in a lot of hard work to ensure the launch went well and had very senior management involved with all aspects for many months before launch day. NVidia have not even come close to this and just expect gamers to buy the cards based on their CEO giving a very dodgy product launch.
 
CD players are a really bad example though: no software development was required, and all music already (obviously) worked when played, without artistic changes needing to be made to the media itself.

How about you look at the introduction of 3D graphics pipelines to gaming instead? That was the last time there was a shift in the way games' rendering pipelines worked as large as this one.

Or maybe programmable shaders/pipelines (launched with/for CUDA cores)? That's the second closest; it took so long that the first API for it, DX10, was superseded by DX11 before games started to take advantage of the features.

Calling the CD player a bigger achievement is incredibly subjective; it was just another way to store 1s and 0s, built on the concepts of magnetic tape and laser-etched discs, and took only around a decade of research and development to go from concept to shipping product. Raytracing has been around for just as long as CDs, but has taken all that time in research and development just to reach near-realtime usability in a practical sense.

Raytracing is not some NVidia-only or NVidia-developed feature, and neither are current implementations of RTRT; this is a cross-industry API developed with all three major hardware vendors together, built on decades of progress in professional raytracing products and decades of academic research and theoretical mathematics. Don't take Jen-Hsun's stage rubbish at face value.
 
Always thought that when Direct3D and 3dfx stepped into the mix we saw a radical change in GPU processing and routines.
 
CD players are a really bad example though: no software development was required, and all music already (obviously) worked when played, without artistic changes needing to be made to the media itself.

How about you look at the introduction of 3D graphics pipelines to gaming instead? That was the last time there was a shift in the way games' rendering pipelines worked as large as this one.

Or maybe programmable shaders/pipelines (launched with/for CUDA cores)? That's the second closest; it took so long that the first API for it, DX10, was superseded by DX11 before games started to take advantage of the features.

Calling the CD player a bigger achievement is incredibly subjective; it was just another way to store 1s and 0s, built on the concepts of magnetic tape and laser-etched discs, and took only around a decade of research and development to go from concept to shipping product. Raytracing has been around for just as long as CDs, but has taken all that time in research and development just to reach near-realtime usability in a practical sense.

Raytracing is not some NVidia-only or NVidia-developed feature, and neither are current implementations of RTRT; this is a cross-industry API developed with all three major hardware vendors together, built on decades of progress in professional raytracing products and decades of academic research and theoretical mathematics. Don't take Jen-Hsun's stage rubbish at face value.

The differences between analog and digital music reproduction, going from vinyl to CD, are a huge subject, far far bigger than getting Ray Tracing to work, and there are lots of areas that are not obvious to people looking at the subject casually. I am not an expert on the subject, but I have seen enough technical debates to know that it makes Ray Tracing look like child's play. Weirdly enough, back in the 1980s the move from analog to digital was sometimes demonstrated visually, a bit like the effect DLSS has on graphics lol.

As I say, I am not an expert, but I know there are a couple of people on these forums who know a lot more than me about the subject.

Have you ever wondered why vinyl has become popular again? There is a lot more going on than the fact it is a bit distorted being analog. Some people even like old-style valve amps, and for good reason.
 