DLSS Won't Be Available in EA's Anthem at Launch - Ray Tracing "Could be Added Later"

No, that's not right.

I can tell you quite easily why DX12 is problematic.
Decades of driver code vs. potentially a single person or small team who's never written a driver before. It's going to take years for this to mature, but it's a step in the right direction.

That is my point: the game devs are not spending the money or getting their act together, which is quite disgusting considering how much money they rake in from some of these games.

How long do they need to get DX12 working? At this rate there will be a totally new API to use by the time they get close.

For one recent game release I read that the developer decided DX12 was not worth it and went back to using just DX11, as there was little benefit from the newer API.

Gamers should demand a lot more than the rubbish they have to put up with when new games are launched with dodgy APIs and terrible bugs.

I think it is absolutely disgusting where the industry is going with mega expensive hardware and overpriced software.




Yep, what SPS said. DX12 in itself is not buggy, does not limit mGPU support in any way, and can be far less memory-hungry than older APIs. One of the first implementations of DX12 we got, in the Battlefield games, was actually excellent: it delivered *major* benefits to hardware that actually supported DX12, and the same went for Mantle. And no, Mantle is not "in no new games"; it's actually in quite a lot, and some next-gen engines (including id Tech 7) use Vulkan/Mantle *exclusively*:
https://en.wikipedia.org/wiki/Vulkan_(API)#Software_that_supports_Vulkan
It's also quickly becoming a prominent API on macOS through the MoltenVK layer.

You have to differentiate between the *technology* and an *implementation* of the technology. Low-level APIs like DX12 and Vulkan are initially meant for giant developer teams, several-hundred-strong, multi-billion-dollar outfits like DICE, working in direct collaboration with hardware vendors. Unless developers essentially rewrite half a GPU's driver for every type of architecture, there's not going to be a whole lot of benefit to a DX12 implementation; developing a DX12 backend can be far more work and far more complicated than creating the game itself (the sketch at the end of this post gives a taste of the boilerplate involved). It'll take time to build up libraries and resources that are both optimised enough and clean enough for more normal-sized developer teams to start using it properly.

Saying DX12/low-level APIs are bad because the incredibly mature older DirectX platform can still keep pace with the relatively young implementations of both hardware and software is like someone in 1900 saying cars are pointless technology because horses can still be faster. Most GPUs on the market don't even have proper DX12/Vulkan hardware support yet, and even Nvidia's latest and greatest is missing what many consider to be key features.

You have to think about the costs: is it really worth doubling a game's budget to implement a DX12 backend that most hardware can't use properly? For most developers right now, no, it clearly isn't, but that's not their fault.
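To give a flavour of that, here's a rough C++ sketch of the boilerplate D3D12 pushes onto the application just to submit an empty command list and wait for the GPU to finish it, work the DX11 runtime and driver used to hide behind a single call. It assumes Windows 10 with the D3D12 headers and linking against d3d12.lib, strips all error handling, and is only an illustration, not anything out of a real engine.

```cpp
// Minimal D3D12 "submit and wait" sketch: device, queue, allocator, command list, fence.
// Illustration only; real engines add swapchains, PSOs, descriptor heaps, barriers, etc.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // nullptr = default hardware adapter; in DX11 you rarely touched anything below this line.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // The application now owns the submission queue the driver used to manage.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // It also owns command memory (the allocator) and command recording (the command list).
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocator));
    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, allocator.Get(), nullptr,
                              IID_PPV_ARGS(&cmdList));
    // ...resource barriers, descriptor heaps and draw calls would be recorded here...
    cmdList->Close();

    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);

    // CPU/GPU synchronisation is manual too: a fence the GPU signals when it reaches this point.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, done);
    WaitForSingleObject(done, INFINITE);
    CloseHandle(done);
    return 0;
}
```

Multiply that by per-vendor, per-architecture tuning and it's easy to see where the "doubling the budget" worry comes from.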



I have absolutely zero sympathy for game devs on this.

If they cannot get DX12 right in a game they should not include it at all, and should tell Microsoft where to put it.

There is no excuse in any business to produce products that are of unacceptable quality.
 
But developer budgets in the game industry have grown in line with profits, and the minimum capital needed for a AAA game has grown almost exponentially with modern expectations. Throwing into that mix an API which is built primarily to give larger development teams more complex tools to fine-tune their code on certain hardware, but which requires exponentially more work to use as well as direct collaboration with hardware vendors, can complicate things much further. The fact is, for all but the largest of dev teams, the work required for a DX12 implementation that most hardware doesn't have the full feature set to use properly would be a completely pointless exercise right now. The costs would far outweigh the benefits, and that's work they could be putting into the game itself, with modern AAA game budgets often being in the range of Hollywood movie budgets.
 

You are just making excuses for a bit of software that is too expensive and complicated to use properly.

One of the big plugs for DX12 was how it was going to make things easier to do. What a farce this has turned out to be.
 

But objectively (or even subjectively) how much time is needed? When will DX12 be the standard? When will we start to see all that hype realised? DX12 was announced five years ago and released three and a half years ago. The first game using it, AotS, came out a few months later and was not exactly a runaway success. Neither was BF1; for months and months even AMD users were forced to use DX11 because DX12 was unusable in that game. It's starting to look like DX12 for the desktop was a pipe dream. You yourself say that it'll take vast amounts of time and money to create an infrastructure to support it. So why does it exist? Why was it hyped? Why do GCN (especially Vega) and Turing support it better than Maxwell/Pascal, two incredibly expensive architectures? It feels like DX12 was a ploy to get people to move to Windows 10 and buy new hardware. Fiji used DX12 as its centrepiece and now Fiji is basically DOA for many gamers. I think Kaapstad is saying that that isn't fair on us as consumers. We're paying for a concept, not a product. The reason why it's still little more than a concept is understandable and not a lot can realistically be done about it, but we're still paying for it.
 

It doesn't really matter for consumers. You're still getting games that work, aren't you? Whether it's 11 or 12, you're still getting games. You aren't paying for it at all.

It takes a long time to research and develop robust code to replace something that has had decades of driver support infrastructure behind it. The developer has to do all the work. Most developers, even in AAA studios, have never done this before, and it's certainly not something you learn in school, where many software engineers learn to code. Throwing more money at the problem just because they make a lot (which is false, as only a few companies out of tens of thousands do) doesn't mean it'll be done sooner or get to consumers faster. A small four-man game studio working out of a bedroom isn't going to support DX12. It's simply not viable, smart, or financially doable, on top of them having no experience writing a driver.

I think people with this mindset think this way because of abstraction, meaning they don't know how programming works but assume they have an idea, so they form opinions about why DX12 sucks. It's a common thread amongst gamers. I have learned a whole lot over the years in school about programming, but programming basically an entire OS driver? No chance (and that's basically what games are, in a sense).
 

I don't know whether you're talking about me, but in fairness I don't remember suggesting game developers throw money at the situation or that I understand how programming works. I asked more questions than gave opinions. That should show I don't have much of an opinion; just a bunch of random thoughts that make sense right now and could easily be changed.

About your other point: Windows 10 cost money. My R9 Fury cost money. Dice's Vega 64 cost money. I bought an R9 Fury at a time when DX12 was advertised as the next big thing. I banked on Radeon/FreeSync and DX12, and neither really paid off. I was running DX11 games on a card better suited to DX12 because I thought DX12 was coming. Now we have £1200 GPUs that require DX12 to be fully utilised. DX11 absolutely works, but it doesn't have any of the shiny new features of DX12 that were so exciting. It again goes back to the hype engine so many created. Nvidia didn't seem too fussed about it back in the Maxwell/Pascal era, but now it's going to be essential to their restructuring and push towards DXR and ray tracing.

You could also argue that the half-baked attempts at DX12 support have sucked the life out of certain titles and drawn much-needed funds away from other important areas. While that doesn't cost the consumer money as the price of the game is still the same, it costs other things consumers are reluctant to pay when the reward is meagre.
 
But objectively (or even subjectively) how much time is needed? When will DX12 be the standard? When will we start to see all that hype realised?
Like they said at the time: many years, for every question there (I don't remember much hype; it might have been a community thing).

DX12 was announced five years ago and released three and a half years ago. The first game using it, AotS, came out a few months later and was not exactly a runaway success. Neither was BF1; for months and months even AMD users were forced to use DX11 because DX12 was unusable in that game. It's starting to look like DX12 for the desktop was a pipe dream. You yourself say that it'll take vast amounts of time and money to create an infrastructure to support it. So why does it exist? Why was it hyped? Why do GCN (especially Vega) and Turing support it better than Maxwell/Pascal, two incredibly expensive architectures? It feels like DX12 was a ploy to get people to move to Windows 10 and buy new hardware. Fiji used DX12 as its centrepiece and now Fiji is basically DOA for many gamers. I think Kaapstad is saying that that isn't fair on us as consumers. We're paying for a concept, not a product. The reason why it's still little more than a concept is understandable and not a lot can realistically be done about it, but we're still paying for it.
You're not paying for anything at all. No one bought into DX12; it's an API that comes with new hardware and software, and it's not really of concern to consumers what API is used in either if the existing one works well. APIs are tools specifically for developers and nothing else. DX12 wasn't meant to replace DX11 across the board: it's a complete shift in direction from the system of high-level shaders and driver interfaces, one that isn't suitable for every development team and isn't designed to be. It was made clear to developers at the time: *don't* use DX12 if you don't have a ton of resources to pour into dedicated hardware codepaths, or if you won't gain anything from it.

Maxwell & Pascal simply didn't have the feature set for it, for the most part. They also don't use general-compute-compatible shaders across the board; when you get down to it they're very quirky architectures that require carefully crafted drivers to utilise properly, and they don't have many additional compute features going unused. Turing & GCN have a much wider array of execution units and instructions, with the ability to carry these out on any SM/shader, and they're more compute-oriented, general-purpose architectures. There are many cases where developers may want to make use of these additional features in their games, but there are also many cases where they'd be pointless (the feature-query sketch at the end of this post shows the kind of thing an engine has to check for).

And BF1 worked fine with DX12 at launch for most people on the hardware that benefited from it, unless you wanted to use overlay, streaming or recording software designed for DX11, though the game was a bit buggy generally.

Maybe you got sucked into some internet community hype train, but the communication about DX12 in more sensible and direct channels certainly wasn't language meant to sell hardware or OSes; those products generally have more notable benefits to consumers than API support.
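As a rough idea of what "having the feature set" means in practice, here's a small C++ sketch (same assumptions as the earlier one: Windows 10, the D3D12 headers and d3d12.lib) of an engine querying which optional DX12 capabilities the installed GPU actually exposes. The specific fields queried here are just examples, not a definitive checklist.

```cpp
// Query optional D3D12 capabilities at runtime; "supports DX12" covers a wide range of hardware.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Resource binding tier:           %d\n", (int)opts.ResourceBindingTier);
    std::printf("Tiled resources tier:            %d\n", (int)opts.TiledResourcesTier);
    std::printf("Conservative rasterization tier: %d\n", (int)opts.ConservativeRasterizationTier);
    std::printf("Resource heap tier:              %d\n", (int)opts.ResourceHeapTier);

    // An engine picks (or skips) codepaths based on answers like these, which is part of
    // why a single "DX12 path" rarely suits every GPU equally well.
    return 0;
}
```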
 
I think people with this mindset think this way because of abstraction, meaning they don't know how programming works but assume they have an idea, so they form opinions about why DX12 sucks. It's a common thread amongst gamers. I have learned a whole lot over the years in school about programming, but programming basically an entire OS driver? No chance (and that's basically what games are, in a sense).

Wait, yeah, just this.
 
I don't know whether you're talking about me, but in fairness I don't remember suggesting game developers throw money at the situation or that I understand how programming works. I asked more questions than gave opinions. That should show I don't have much of an opinion; just a bunch of random thoughts that make sense right now and could easily be changed.

About your other point: Windows 10 cost money. My R9 Fury cost money. Dice's Vega 64 cost money. I bought an R9 Fury at a time when DX12 was advertised as the next big thing. I banked on Radeon/FreeSync and DX12, and neither really paid off. I was running DX11 games on a card better suited to DX12 because I thought DX12 was coming. Now we have £1200 GPUs that require DX12 to be fully utilised. DX11 absolutely works, but it doesn't have any of the shiny new features of DX12 that were so exciting. It again goes back to the hype engine so many created. Nvidia didn't seem too fussed about it back in the Maxwell/Pascal era, but now it's going to be essential to their restructuring and push towards DXR and ray tracing.

You could also argue that the half-baked attempts at DX12 support have sucked the life out of certain titles and drawn much-needed funds away from other important areas. While that doesn't cost the consumer money as the price of the game is still the same, it costs other things consumers are reluctant to pay when the reward is meagre.

Only my first paragraph was in response to you. The rest was to everyone

But no, your point about hardware is irrelevant and doesn't apply. You can still use them, and you still get the benefits of newer hardware: it's still faster and more efficient, and it works. They were never sold on that promise. Windows didn't and still doesn't cost anybody money; you can still get it for free, and few people actually paid for it.

As for the newer fancy features, they're really only relevant now; they never were previously. Banking on APIs is never smart. Sure, the promise was there, but waiting on what exactly? Asynchronous compute was the biggest thing ever and I was all for it. And it's being used extensively for consoles; that's where it mattered most. Some games used it, but it at most only gives 5-10% more performance and never promised more. That's not much more performance for many people.
What else was there? Better optimizations for lower end hardware due to less overhead and efficient pipelining? Sure, and for the most part it failed. But that's just due to R&D and lack of actual need for it.

What you, like virtually every gamer, fail to realise is that DX12 is 90% focused on the CPU. So to say it sucks because of a lack of GPU stuff is to ignore the fact that it was always for the CPU. Only the gaming media focused on GPUs and shoved those down everyone's throat, simply because that's all they care about or understand. It was always first and foremost for the CPU; the big three promises the gaming media focused on (better GPU performance, mGPU and asynchronous compute) were never the real focus. Yes, performance is always a focus, but software developers measure performance at the millisecond level, not in FPS.
Saying mGPU is a failure of DX12 is wrong; it was failing long before DX12 came out.
Asynchronous compute delivered, but nobody cared because the performance uplift wasn't big enough. I'm not sure what people were expecting from what is, at a high level, essentially rearranging compute and graphics tasks.
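For anyone wondering what that rearranging looks like at the API level, here's a hedged C++ sketch (Windows 10 / D3D12 assumptions as earlier in the thread, actual workloads omitted): async compute is just a second, compute-type queue submitted alongside the graphics queue, with fences keeping dependent work ordered. Whether the overlap gains you anything is entirely down to the hardware.

```cpp
// Async compute in D3D12, reduced to its skeleton: a DIRECT (graphics) queue plus a
// separate COMPUTE queue the GPU may service concurrently, ordered with a shared fence.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // Graphics work is submitted here...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...while independent compute work (particle sims, light culling, etc.) goes to a
    // compute queue that can overlap with it on hardware that actually supports the overlap.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Cross-queue ordering is explicit: the graphics queue signals a fence value and the
    // compute queue waits on it before running anything that depends on the graphics work.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    gfxQueue->Signal(fence.Get(), 1);
    computeQueue->Wait(fence.Get(), 1);
    return 0;
}
```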
 
Some games used it, but it at most only gives 5-10% more performance and never promised more. That's not much more performance for many people.
What else was there? Better optimizations for lower end hardware due to less overhead and efficient pipelining? Sure, and for the most part it failed. But that's just due to R&D and lack of actual need for it.
I wouldn't say that. Devices like Bulldozer APUs, or systems with FX CPUs and GCN cards, benefited greatly from DX12/Vulkan/Mantle; games like Battlefield and DOOM could run perfectly fine on £100 APUs.

A major benefit of DX12 was proper multi-threading support for CPU calls, amongst other things, which was obviously particularly desirable for consoles (there's a rough sketch of that recording model at the end of this post). Technically you could argue DX12 is likely the most popular single gaming API there is, given its use in the Xbone.

Also worth noting that NVidia hardware pre-Volta/Turing didn't benefit from async compute and still doesn't have a proper implementation of it. There's not much point using an API when most gamers' hardware can't even use it properly yet; it doesn't matter how long ago the framework for an API was created, since it takes a bunch of chicken-and-egg cycles to get anything useful when we're just talking about a set of tools.
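Here's the promised sketch of that multi-threaded recording model, again a simplified C++/D3D12 illustration (Windows 10 headers, d3d12.lib, no real draw calls): each worker thread records its own command list with its own allocator, and the main thread submits the whole batch in one go, which is the draw-call scaling DX11's largely single-threaded immediate context couldn't offer.

```cpp
// Multi-threaded command recording in D3D12: one allocator + command list per thread
// (allocators aren't thread-safe), recorded in parallel, submitted together.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    const int kThreads = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each thread records its own slice of the frame; only its own list is touched.
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i) {
        workers.emplace_back([&lists, i] {
            // ...SetPipelineState / DrawInstanced calls for this thread's chunk of the scene...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // One submission from the main thread covers everything the workers recorded.
    D3D12_COMMAND_QUEUE_DESC qd = {};
    qd.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qd, IID_PPV_ARGS(&queue));
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
```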
 

You didn't read what I said my dude
 
So what's the big deal? A lower-overhead API didn't bring a boost if your hardware could already deal with that overhead? Well, colour me surprised.
 
The promotional material from AMD was heavily focused on DX12 features. RX 480 Crossfire = GTX 1080 performance in Sniper Elite. Fury X 4k performance in Deus Ex in DX12 mode because of Async Compute. It wasn't just the media.
 
Nope. You can look at many press releases for the RX 480; the full text from AMD is even included here:
https://www.anandtech.com/show/10389/amd-teases-radeon-rx-480-launching-june-29th-for-199
Not one mention of DX12.
Besides, you said yourself "RX 480 Crossfire = GTX 1080 performance in Sniper Elite". There is no Crossfire in DX12, and it was always known there never would be for consumer games; AFR/Crossfire makes no sense in a low-level API. mGPU was demoed in Ashes, but mGPU != CFX and demos != games.

Here, the only mention of DirectX 12 in RX 480 launch or post-launch material was in after-launch reviews from certain reviewers:
https://www.google.co.uk/search?q=r...lnt&tbs=cdr:1,cd_min:,cd_max:7/1/2016&tbm=nws

But as people have pointed out, and as was stated at the time, low-level APIs exist to get the most out of limited or exotic hardware; a beast of a PC generally has enough headroom, particularly CPU headroom, that it won't matter.
Remember, AMD only shouted about DX12 before Zen launched. The biggest reason DX12 got pushed in the PC world was Bulldozer, and I think most people not reading hypey press sites knew that from day one.
 

Do you not remember the dual 480s being compared against the GTX 1080? And the word Crossfire was just shorthand for two graphics cards.

Maybe I don't remember things as well as I thought. All I remember was DX12 this, DX12 that. Async this, Async that.

Interview with a head of marketing for Radeon posted 2.5 years ago:
http://www.redgamingtech.com/interv...ock-polaris-graphics-technology-vr-dx12-more/

"The DX12 adoption rate is on track to be the fastest ever. We know that is true based on our discussions with software partners, comparing what they’re telling us to the historical record for DX9 or DX11. The performance for AMD products has been exceptional: Hitman, Ashes of the Singularity, Quantum Break… all of these DX12 games have produced runaway wins for Radeon GPUs, and asynchronous compute only widens the gap."
 
The dual 480s against the GTX 1080 was a comparison made a fair bit, but the only case of that in DX12 ever discussed was an Ashes demo, and that wasn't really a consumer conference or piece of consumer material; it was a tech demo shown once at a developer conference, iirc.

But to be fair, DX12 has been adopted at a similar rate to many previous instalments. DX9 launched in 2002 but wasn't really common until around 2007 and is still used in plenty of games now; DX11 became available in 2009, but it wasn't until around 2012/13 that games started shipping full implementations that used its features properly and were meaningful improvements on their DX9 engines.
 
Do you not remember the dual 480s being compared against the GTX 1080? And the word Crossfire was just shorthand for two graphics cards.

Maybe I don't remember things as well as I thought. All I remember was DX12 this, DX12 that. Async this, Async that.

Interview with a head of marketing for Radeon posted 2.5 years ago:
http://www.redgamingtech.com/interv...ock-polaris-graphics-technology-vr-dx12-more/

"The DX12 adoption rate is on track to be the fastest ever. We know that is true based on our discussions with software partners, comparing what they’re telling us to the historical record for DX9 or DX11. The performance for AMD products has been exceptional: Hitman, Ashes of the Singularity, Quantum Break… all of these DX12 games have produced runaway wins for Radeon GPUs, and asynchronous compute only widens the gap."

See, this is where you read into it wrong.
DX12 having the highest adoption rate doesn't mean in shipped games; it means among studios who have started looking into it. You also have to forgive AMD for the marketing talk; of course they will hype it up. Anyway, it's easy to see that DX12 does indeed have the highest adoption rate. It's used for the Xbox, for example; we'd never had a PC API run on a console before. So they never stopped working on it, it's just happening without everybody advertising it. So many studios are working on it, and it's also now on at least one console. That's impressive.

Older API releases also took forever to be adopted; it's not exclusively a DX12 problem.
 