On Friday, AMD Chief Architect of Gaming Solutions Frank Azor said that if Bethesda wanted to implement rival Nvidia's DLSS upscaling technology in its upcoming RPG Starfield,...
AMD denies blocking Bethesda from adding DLSS to Starfield | Starfield DLSS mod locked behind a paywall
I support game producers being free to implement whatever tech they choose to work with. That said, I find it kind of stupid to side with NVIDIA promoting their exclusive DLSS over an open standard when the quality difference is only noticeable in side-by-side comparisons.
The game has both DLSS and FSR? Great. The game only has FSR? Well, that’s more inclusive than DLSS-only. Everybody benefits with open standards.
Yeah, and Nvidia is pretty guilty of trying, time and time again, to lock people into proprietary solutions, while AMD introduces open standards (FreeSync comes to mind).
I don’t need a side-by-side to know the difference. DLSS is better in performance, hands down. To get the same performance with FSR, I have to sacrifice other settings, if I can get there at all.
Yeah, some people might be able to tell, but I don’t think it’s worth the trade-off of excluding a large part of the market.
NVIDIA doesn’t even respect their own user base. I have a 3080 and can’t use DLSS 3. I’ll keep supporting open technologies.
I got a 3080 and I would never want to use DLSS 3 anyways. Keep that stupid ass fake frame generation away. I can put up with upscaling since it’s at least a true rendered frame, but that’s pretty much where I draw the line. Fake frames might make it feel smooth, but I’m not into this hobby for the feels.
that’s fair, I’m absolutely in it for the feels haha
I just play to have a good time
No one is saying FSR should be excluded.
Though if there’s only going to be one hardware-agnostic upscaler, then I’d rather it be XeSS than FSR.
AFAIK it has the same problem as DLSS of being exclusive, though.
Nobody wants the exclusion of any technology, that’s the entire point. Especially since it’s been shown repeatedly that once you implement one of the three techs (FSR, DLSS, XeSS), the other two take almost no effort to add as well, because they all consume the same engine inputs (see the sketch below). So little effort that modders have managed to shove them into games that exclude them for whatever reason, sometimes in a matter of hours.
All that said… DLSS is definitely better quality than FSR. “Some people might tell” is an understatement.
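To make the “almost no effort” point concrete, here’s a minimal sketch of that shared plumbing, assuming a hypothetical engine-side abstraction (none of these type or function names come from a real SDK). All three temporal upscalers consume the same per-frame inputs, jittered low-res color, depth, and motion vectors, so once the engine exposes those for one of them, dispatching to another is mostly a switch statement:

```cpp
#include <cstdio>
#include <initializer_list>

// Hypothetical engine-side abstraction; not any vendor's actual API.
enum class Upscaler { FSR2, DLSS, XeSS };

// The per-frame data all three temporal upscalers consume. Producing this
// once is the hard part of an integration; the rest is vendor plumbing.
struct UpscalerInputs {
    const void* colorLowRes;    // jittered low-resolution color target
    const void* depth;          // depth buffer
    const void* motionVectors;  // per-pixel motion vectors
    float jitterX, jitterY;     // sub-pixel camera jitter for this frame
    float frameTimeMs;          // frame time, used for temporal accumulation
};

// Same inputs in, different vendor call out; swapping the back end is a
// one-line change once UpscalerInputs exists.
void dispatchUpscale(Upscaler which, const UpscalerInputs& inputs) {
    (void)inputs;  // a real integration would hand these to the vendor call
    switch (which) {
        case Upscaler::FSR2: std::puts("dispatch -> FSR 2"); break;
        case Upscaler::DLSS: std::puts("dispatch -> DLSS");  break;
        case Upscaler::XeSS: std::puts("dispatch -> XeSS");  break;
    }
}

int main() {
    UpscalerInputs inputs{nullptr, nullptr, nullptr, 0.25f, -0.25f, 16.7f};
    for (Upscaler u : {Upscaler::FSR2, Upscaler::DLSS, Upscaler::XeSS})
        dispatchUpscale(u, inputs);  // one set of inputs feeds all three
}
```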
Your 3080 can’t run frame generation because it wouldn’t improve your framerate on that GPU architecture, just like software DLSS won’t improve framerate on a 1080.
Nvidia isn’t some boogeyman holding back these techs just because they want to force people to buy new cards. They are definitely making tech that only works on the newest cards to try and get more sales, don’t get me wrong, but it’s not arbitrary.
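For reference, my understanding of the hardware gating as of the DLSS 3 era; treat this as a commenter’s summary, not vendor documentation:

```cpp
#include <cstdio>

// Rough support matrix as of the DLSS 3 era; my own summary, not vendor docs.
int main() {
    std::puts("technique        | minimum hardware");
    std::puts("-----------------+------------------------------------------------");
    std::puts("FSR 1 / FSR 2    | any modern GPU (shader-based, vendor-agnostic)");
    std::puts("XeSS             | any GPU with DP4a; faster XMX path on Intel Arc");
    std::puts("DLSS upscaling   | Nvidia RTX 20 series and up (tensor cores)");
    std::puts("DLSS 3 frame gen | RTX 40 series only (Ada optical flow hardware)");
}
```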
DLSS 3.5 on the 20 and 30 series should be interesting, though
Minus the frame generation for the 20 and 30 series, that is.
I haven’t built an all-AMD rig since the early 2000s. It’s time.
Having no exclusives at all would be as bad for the gamer economy as having only exclusives.
I’m interested in the next version of FSR; it’s rumored to include frame generation.
Can you elaborate on that? I don’t see a clear benefit of exclusives to the user base or industry in general, only to those involved.
If small devs are expected to support every platform day one, that increases the barrier to entry.
A world where small teams start their release on the one or two platforms they find advantageous, and then port their successful titles to other platforms afterward, is probably safest for them and offers the most product diversity for consumers.
I’m not a fan of using the same word to describe two very different kinds of exclusivity.
Exclusivity due to platform contracts (e.g., Sony paying a developer to keep a game exclusive to PlayStation) is not the same as the exclusivity you described in your comment.
and then port their successful titles to other platforms
Well, then they’re not exclusives, are they? I get the point of speeding up time to market, but I’m questioning the benefit of “lifetime exclusives”, or anything beyond a year, honestly.
The implication, of course, is that less successful titles will not be ported, either because the company runs out of money or because it feels better off working on its next title than investing more resources in porting a middling title to a second-choice platform.
The problem is that it’s artificial performance. Frame generation that makes your FPS counter show a bigger number isn’t the same thing as your GPU being able to sustain that bigger number through actual rendering performance.
The question is: do I care? Yes, these are technically not real frames, but if I don’t see the difference, why does it matter? I personally don’t care as long as the frames look good and I have enough of them.
Yeah, and that’s what Nvidia is banking on… literally. People continuing to buy Nvidia GPUs under the idea that it’s a more powerful experience, while Nvidia uses tricks and locks features behind closed-source BS, drives up prices and perpetuates a consumer-driven system that screws everyone.
Do fake frames process inputs?
No, but that’s not really a concern. Unless the frame generation is significantly affecting the real frame rate, you will get smoother motion with latency similar to what you’d have without it. It’s probably not ideal for competitive games, where you want motion to be 1:1, but it’s probably good enough for more casual ones.
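A crude back-of-the-envelope model of why that holds for interpolation-based frame generation (the numbers are assumptions for illustration, not measurements): the GPU still renders, and samples input, at the base rate; the generated in-between frames only smooth motion, and holding the newest real frame back for interpolation costs roughly half a base frame time:

```cpp
#include <cstdio>

// Crude back-of-the-envelope model of interpolation-based frame generation.
// Assumptions for illustration, not measurements: generated frames are
// interpolated between two real frames, so the newest real frame is held
// back about half a base frame time, and generated frames never sample input.
int main() {
    const double baseFps     = 60.0;              // what the GPU actually renders
    const double baseFrameMs = 1000.0 / baseFps;  // ~16.7 ms per real frame

    const double displayedFps = 2.0 * baseFps;     // what the FPS counter shows
    const double inputRateHz  = baseFps;           // input sampled per real frame only
    const double addedLatency = baseFrameMs / 2.0; // ~8.3 ms interpolation hold-back

    std::printf("displayed: %.0f fps, input sampled at %.0f Hz\n",
                displayedFps, inputRateHz);
    std::printf("added latency vs. no frame gen: ~%.1f ms (plus generation cost)\n",
                addedLatency);
}
```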
It’s hard for old peeps like me to let frames go. Wolf:ET on a 333 MHz Compaq was hell, and I’ve been chasing frames ever since.
I’m reposting my old comment regarding DLSS frame-gen:
If it can help me maintain a more stable FPS, that would be a boon. If I’m playing a game with unstable frame rates and a lot of stuttering, I usually get a headache after an hour. So if frame gen can help my PC run games at a more stable frame rate, then I’m all for it. The first-gen implementation of it may be shitty, but after a couple of generations it can be good.
Look at where DLSS is now: DLSS 1 was objectively shit, but since DLSS 2 it can in some cases even improve image quality. I game on a 1080p 380 Hz screen, and when I’m playing games with upscaling like DLSS or FSR, I’ll run the game at 4K and then run the upscaler in performance mode, which is basically rendering the game at 1080p. The results are much better than just running native 1080p.
That being said, having a more consistent frame rate will make your experience better. The input lag difference won’t be a problem in single-player games, as it will be under 100 ms anyway (F1 drivers have 200–300 ms reaction times), so it won’t make too much of a difference.
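On the “4K performance mode is basically 1080p” point, the per-axis render-scale factors commonly published for DLSS and FSR 2 quality modes (taken here as assumptions; check the SDK docs for your version) work out like this:

```cpp
#include <cstdio>

// Per-axis render-scale factors commonly published for DLSS and FSR 2
// quality modes; treated as assumptions here, check your SDK's docs.
struct Mode { const char* name; double scale; };

int main() {
    const Mode modes[] = {
        {"Quality",           0.667},
        {"Balanced",          0.580},
        {"Performance",       0.500},
        {"Ultra Performance", 0.333},
    };
    const int outW = 3840, outH = 2160;  // 4K output target

    for (const Mode& m : modes) {
        // Internal render resolution before upscaling (real SDKs round
        // these to even values).
        const int w = static_cast<int>(outW * m.scale);
        const int h = static_cast<int>(outH * m.scale);
        std::printf("%-17s -> %dx%d internal\n", m.name, w, h);
    }
    // Performance mode lands on 1920x1080, i.e. exactly the "4K performance
    // mode is basically 1080p" claim above.
}
```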
I was more upset about this because I didn’t realize FSR was supported on Nvidia cards. I always just used DLSS on my 2080S. I do wish Nvidia would be more open with their technology, but it’s probably why they lead in GPUs. Going back at least as far as HairWorks and PhysX, as far as I can remember, it made buying Nvidia the better product.
AMD has had 20 years to get good, and calling the DLSS difference barely noticeable is a coping mechanism. FSR ain’t it, especially at 4K.
It’s actually the opposite: the gap between DLSS and FSR is bigger at lower resolutions than it is at 4K+. That’s because DLSS can fall back on its AI model to reconstruct detail that a low-resolution image cannot supply, whereas at a higher base resolution that detail is already mostly available. It’s why, in side-by-side tests, the difference gets less noticeable as you approach 4K. It’s definitely one of the tests HUB (Hardware Unboxed) runs when comparing the scaling algorithms side by side.