Reminder: Temporal, proprietary upscalers are only made mandatory by devs who actively refuse to make a properly functioning product.
Reminder: Most devs actually care about the things they make. This is a management/timeline problem, not a developer one.
Well, I should have clarified: by “devs” I mean the entire companies, not the individuals. It’s a collective problem, not an individual one.
And/or consumers insisting on playing in 4K because “big number”, even though fill rate is a huge issue in modern games and you can barely tell the difference on most setups. Which would not be so bad if they also didn’t want ever-increasing graphical fidelity and 120+ fps on top of that.
In my opinion, fidelity is getting worse than what we had 10 or 20 years ago, because now we have noise, pop-in, and temporal smearing from proprietary TAA and TSR. Examples being Payday 3 and this new Justice League or Batman game where you play as the four characters, which I can’t be bothered to remember, because everything about it is way worse than the Arkham Knight game, which is almost 10 years old by now.
4K is absolutely an upgrade over 1440p. I have two of them (an LCD and an OLED) and I absolutely love them in every game I play. I will admit that I’m in the super minority, and because of my work history I’ve spent a lot of time looking at a lot of displays, so I’m more sensitive to various artifacts than the average person. And in games I always prefer looks over resolution; it needs to drop down to like 40 fps or lower for me to start changing settings.
Basically, it was worth it for me but probably won’t be for you. OLED, though, is an actual, significant upgrade. You should get an OLED; it’ll change your life.
Let’s not forget Nvidia created DLSS and RTX ray tracing and directly helped devs integrate them into their games to create demand for their newer cards.
Yeah, they laid out the bait and got them hook, line and sinker.
I’ll take DLSS over any other AA solution any day.
We no longer use forward renderers; AA either looks like ass or comes with a massive performance cost, and it can’t fix noise from foliage, alphas, smoke, etc. DLSS fixes all three issues at once.
Well, Half-Life Alyx uses forward rendering and has a brilliant MSAA implementation. It is optimised because it needs to be: you cannot have a VR game chugging along at 30 Hz in full HD; you need 4K or more running at 90 Hz or more. So they invested a good amount of time into making sure it functions properly before releasing it.
Also, foliage really doesn’t need to be fixed if it is done properly. For example, nearly 20-year-old games like Halo 3 or the Crysis games.
I take issue with modern games because why the hell are they forgetting lessons of the past? Crysis and Halo 3, for example, are nearly 20 years old and have better-looking foliage than most modern games, because those teams knew how to avoid pop-in and noise. Yes, modern games have more foliage, because there’s more VRAM, but older games have better-looking foliage due to the lack of wonky artifacts, in my opinion. And the proprietary TAA or TSR implementations, in my experience, add a ton of input latency, which makes the game feel worse. MSAA, because it uses geometry information to build the anti-aliasing, enhances image quality significantly and gives a better-looking, more coherent picture than any other anti-aliasing implementation, including proprietary TSR. MSAA isn’t my religion, either; I realise there are aspects where TAA and TSR can be useful. The problem is that in modern games they get abused, because devs can then say “we’ll just do the absolute minimum, make sure the game runs at 30 fps in HD on the hardware, and let the magic TSR and frame generation handle the rest”.
Well, the problem with MSAA is that it needs good geometry in the first place. If quad overdraw is complete shit because no one bothered to make tessellation or proper LOD models and just let some automatic tool handle everything without any supervision, then yes, it will be horrible. If devs say “it makes my geometry timing horrible”, then we already know their geometry is utter rubbish.
Also, a brilliant example of why I’m bothered by that is Payday 3, because it looks like a late PS3 game, runs like complete trash, and has a massive CPU bottleneck no matter what you do, even if you fiddle with the engine settings themselves.
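To make the “MSAA uses geometry information” point from a couple of comments up concrete, here is a toy sketch in Python (nothing like real GPU code, and the 4x sample pattern is made up): coverage is tested per sample against the triangle edge, the colour is shaded only once per pixel, and the resolve blends by covered fraction.

```python
# Toy illustration of a 4x MSAA resolve for one pixel sitting on a triangle edge.
# Coverage is evaluated per sample (the geometry information); shading runs once per pixel.

# Hypothetical 4x sample offsets inside the pixel (not a real hardware pattern).
SAMPLE_OFFSETS = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def covered(sample_x, sample_y, edge_x):
    """Treat the triangle as everything left of a vertical edge at edge_x."""
    return sample_x < edge_x

def resolve_pixel(pixel_x, pixel_y, edge_x, tri_color, bg_color):
    shaded = tri_color  # shade once per pixel -- the saving vs. supersampling
    # Coverage test per sample: this is where the geometry information comes in.
    hits = sum(covered(pixel_x + ox, pixel_y + oy, edge_x) for ox, oy in SAMPLE_OFFSETS)
    weight = hits / len(SAMPLE_OFFSETS)
    # Resolve: blend the shaded colour with the background by covered fraction.
    return tuple(weight * s + (1 - weight) * b for s, b in zip(shaded, bg_color))

# A pixel whose left half lies on the triangle gets a smooth 50% blend of red and black.
print(resolve_pixel(0.0, 0.0, edge_x=0.5, tri_color=(1.0, 0.0, 0.0), bg_color=(0.0, 0.0, 0.0)))
# -> (0.5, 0.0, 0.0)
```

That once-per-pixel shading is also roughly why MSAA gets expensive when tiny triangles and quad overdraw force shading to run far more often per pixel, which is the geometry complaint in the reply above.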
There’s a reason you had to fish for an exception to find a modern game with a forward rendering engine.
Okay then, but it still works. It is still hard to claim that Half-Life Alyx runs badly or looks bad. I can only judge from my perspective as a customer. Why do we use these weird, wonky, hacky solutions for deferred rendering if the other approach can look just as good and run just as well, but doesn’t need any of these workarounds?
I didn’t claim it doesn’t work. I claimed there’s a reason that, out of hundreds of releases, you have a single example of a forward renderer.
Which means TAA will keep being a problem, so my remark that DLSS is miles ahead applies to pretty much all games, even if once in a blue moon you find an exception.
FWIW it’s more than an exception. IMHO it’s one of the very best games I’ve played in my life. It’s more than a game, it’s an experience. I was in City 17.
Easy to not have artifacting when everything is a big smudge.
Have you used DLSS, or are you extrapolating from FSR at 1080p and believing it looks the same?
Yes, I have. It’s also crap. The super aggressive softening makes you feel like you are using a myopic camera. You could argue it’s a poor implementation by developers, but it makes no difference to me.
They fixed it in DLSS 4.
TAA is ass, but don’t lump the other ones in with it.
Not sure why most games can’t/don’t do this, but I’ve seen Minecraft shaders use temporal upscaling exclusively on the clouds, reflections, and shadows, while using FXAA for the rest of the image.
Because you need to dig into the rendering engine to do that, and if you didn’t build it yourself you might not be able to do that easily.
Which would be easier if you were a dev making your own game than if you were making a mod for an existing one, no?
Depends on the rendering engine architecture. If it already processes stuff in layers you can work with that more easily; same if you can insert rules for stuff like different shaders for different object types.
If you’re dealing with a game where the rendering engine can’t do that it will be very complex regardless of how much source code you have.
Wow, I didn’t know that, that’s genuinely cool.
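For anyone curious, the per-layer trick mentioned above is basically an exponential history blend applied only to the noisy buffers (clouds, shadows, reflections), while the main image gets a cheap spatial pass like FXAA. A minimal sketch in plain Python follows; the blend factor and the noise model are invented for illustration, and this is nothing like the actual shader code:

```python
import random

# Toy per-layer temporal accumulation: blend only the noisy layers (clouds,
# shadows, reflections) against their history; the rest of the image would get
# a cheap spatial pass like FXAA instead. All numbers are made up.

ALPHA = 0.1  # invented blend factor: lower = more history (smoother but smearier)

def temporal_accumulate(history, current, alpha=ALPHA):
    """Exponential moving average: new_history = lerp(history, current, alpha)."""
    return history + alpha * (current - history)

random.seed(0)
TRUE_VALUE = 1.0  # the stable signal hiding under the per-frame noise
history = {"clouds": 0.0, "shadows": 0.0, "reflections": 0.0}

for frame in range(120):
    for layer in history:
        noisy_sample = TRUE_VALUE + random.gauss(0.0, 0.5)  # per-frame noise
        history[layer] = temporal_accumulate(history[layer], noisy_sample)

# The accumulated layers settle near 1.0: the noise averages out over time,
# which is why these layers benefit from a temporal pass while the rest does not.
print({layer: round(value, 3) for layer, value in history.items()})
```

This is roughly the same accumulation TAA/TSR does, just confined to layers where smearing is hard to notice (reprojection and history clamping are left out entirely).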
Honestly I couldn’t care less, because DLSS/FSR looks better than native with AA at this point. It’s so good that I even turn it on in games where I don’t need to.
Quality comparable to supersampling, and I get an FPS boost too? Sign me the fuck up. It’s like magic.
IMO, I dislike them because in my experience they add input latency. But well, horses for courses.
Frame Generation adds input lag, but I haven’t heard of any upscaling algorithms causing issues.
Well, it’s subtle, but it’s still there in my experience, about 2 ms. That’s bad if you’re already at your monitor’s refresh rate: enable it and you get roughly 2 ms of additional input latency. But if you’re getting fewer fps than your refresh rate, the extra fps can cancel the effect out, because the shorter frame times let you hit your refresh rate. In my experience, anyway; I’m very sensitive to that.
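To put rough numbers on that trade-off, here is a back-of-the-envelope sketch in Python. It treats the ~2 ms figure above as a given and uses frame time as a stand-in for the rendering part of the latency chain, so the exact values are illustrative only:

```python
# Back-of-the-envelope check of the trade-off described above. Assumes the
# ~2 ms upscaler cost quoted in the comment; real numbers vary per game,
# GPU and upscaler, and frame time is only a proxy for total input latency.

UPSCALER_COST_MS = 2.0

def frame_time_ms(fps):
    return 1000.0 / fps

def net_latency_change_ms(fps_before, fps_after):
    """Positive = more latency after enabling upscaling, negative = less."""
    return (frame_time_ms(fps_after) + UPSCALER_COST_MS) - frame_time_ms(fps_before)

# Already capped at 144 Hz: fps doesn't improve, so you just eat the 2 ms.
print(round(net_latency_change_ms(144, 144), 2))  # -> 2.0

# GPU-bound at 45 fps, upscaling lifts you to 72 fps: the shorter frame time
# more than cancels the 2 ms (22.2 ms -> 13.9 ms + 2 ms).
print(round(net_latency_change_ms(45, 72), 2))  # -> -6.33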
TAA is garbage. Devs are using it as a crutch too. Look up Threat Interactive on YouTube.