This was actually the sub-headline of the article, but I thought it was the more important part of the article.
Speaking with developers and artists at studios that have agreed to DLSS 5, including CAPCOM and Ubisoft, Insider Gaming was told that the DLSS 5 tech was revealed to them at the same time as everyone else.
“We found out at the same time as the public,” said one Ubisoft developer.
Developers at CAPCOM tell Insider Gaming that the announcement and the publisher’s involvement were particularly shocking, as CAPCOM has historically been very “anti-AI” with projects such as Resident Evil Requiem and other unannounced projects in development. Some at the publisher fear that the DLSS 5 announcement could prompt a change in the publisher’s view on generative AI and its implementation in its games.



From my understanding, it may be possible to work around some of this, since the program is meant to hook into the game in a number of different ways. It’s very possible that an “importance” mask could be added as an input, for example. This wouldn’t fix everything, but it would still give a way to separate game elements from environmental details.
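To make the idea concrete, here’s a minimal sketch of what an importance-mask hook could look like. This is purely hypothetical — nothing here reflects DLSS’s actual API; the function name, the mask convention (1 = gameplay-critical, 0 = background), and the frame format are all my own assumptions for illustration.

```python
import numpy as np

def blend_with_importance(raw_frame, ai_frame, importance):
    """Hypothetical blend: pixels the engine marks important (mask -> 1)
    keep the raw render; background pixels (mask -> 0) take the
    AI-generated output."""
    importance = importance[..., None]  # broadcast mask over RGB channels
    return importance * raw_frame + (1.0 - importance) * ai_frame

# Toy 2x2 RGB frames: "raw" stands in for real game elements,
# "ai" for AI-filled environmental detail.
raw = np.ones((2, 2, 3))
ai = np.zeros((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])  # e.g. HUD/gameplay pixels flagged important
out = blend_with_importance(raw, ai, mask)
```

The point is just that a single extra per-pixel input from the engine would be enough to keep gameplay-relevant elements untouched while letting the AI fill in the rest.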
That said, there’s been so much focus on how it looks. IMO, it’s completely overblown, especially when all of this needs to be manually configured on a game-by-game basis. Devs can tweak the settings to their own preferences, and make things more or less extreme.
The part that’s much more worthy of mockery is the fact that they’re demoing a consumer product on professional-grade hardware, during a hardware shortage. They couldn’t even get the demo working on a high-end gaming PC, and they think this tech is worth advertising? That’s the funny part of all this.
It’s wild that every defense of this bs is “Just have devs spend even more time finetuning for this.” Yes, let’s double (or more) the workload of artists and programmers that are already overworked and crunched beyond reason, all for a “feature” that looks like garbage in its showcase demo and that’s so resource intensive that very few users will be able to utilize it, if they even want to.
It’s more an argument against the “artist’s intent” and “disrupting gameplay” points.
Do you have any evidence for this? Given what’s been shown, this seems relatively easy to implement on the game dev side.
Even if implementing it turns out to be trivial, testing art assets for quality and consistency will be a nightmare. Especially if the underlying generative AI isn’t deterministic.
Even if implementing it is trivial, it’s also still “one more thing”. Just like optimizing for the Steam Deck, considering features that might not be on the lowest-tier console release, accessibility requirements, and dozens of other checklist items that might go further and further down the list. Worse if DLSS ends up interfering with those other checklist items after they’ve already been verified.
Yes, but what the tech costs to implement has a huge impact on what it is, and how (or if) it’s ever implemented. So far as I can tell from my own research, the original commenter was lying, which makes sense. If it actually increased dev time that much, even Nvidia wouldn’t be stupid enough to try to sell it. “AI graphics costs $10 million to implement, and has negligible impact on sales” would not look good for their bubble.
Yes, depending on implementation details. I mean, it’s never going to be completely consistent, but I don’t expect these companies to mind a little brand damage if they get a short-term boost in investment.
I’m more thinking that as it stands, the hardware requirements make it DOA for users. They’re saying they’ll improve it, although I have my doubts. That said, even if no one can run it, it may be popular among publishers for screenshots and marketing. On the other hand, if it does actually double dev costs, then it’ll be DOA even for corporate use.