Thinking about how we have reached the point where a 4K "native quality" output comes from a 1080p internal render resolution, either via DLSS 2 Performance mode today or, looking forward, UE5's Temporal Super Resolution & (we assume) FidelityFX Super Resolution: blog.shivoa.net/2021/05/fewer-
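For scale, the arithmetic behind that pairing (a quick sketch; 1920x1080 is the standard DLSS Performance-mode internal resolution for a 4K target):

```python
# Sketch: sample budget when upscaling 1080p -> 4K.
internal = (1920, 1080)   # internal render resolution (DLSS 2 Performance mode at 4K)
output = (3840, 2160)     # 4K output resolution

internal_px = internal[0] * internal[1]   # fresh samples rendered per frame
output_px = output[0] * output[1]         # pixels the upscaler must fill

ratio = output_px / internal_px
print(ratio)   # 4.0: one fresh sample per 2x2 quad of output pixels
```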

Already slightly concerned with the route AMD have chosen for upscaling & awaiting head-to-head comparisons between UE5's technique & FSR once they're both public. I hope I'm wrong but, as others have noted, the early information shows clear limitations (possibly putting this closer to UE4's upscaler in its quality trade-offs). We're no longer in 2017, when any alternative to the various checkerboarding techniques would have been a major boon on PC.
anandtech.com/show/16723/amd-f

@shivoa I think that this approach could help them with adoption though. It doesn't seem to require much from the rendering pipeline, which should make it easier to integrate. I'm pretty sure that Godot will at least look into integrating it, as it could mean basically free performance for close to zero work for the developers.

@ignaloidas Totally agree: if you don't generate motion vectors then this will do something for you while TAA-style options are off the table (& presumably it looks better than AMD's existing FidelityFX CAS Upscaler tech, or MLAA followed by a basic upscale). I just wonder how many competitive engines are going to be able to plug this in & claim they're of the same visual generation (at similar framerates) as engines that are now iterating on advanced TAA solutions or have DLSS magic behind their output.

@shivoa I think it remains to be seen whether this can work with motion vectors. Even if it doesn't require them, that doesn't mean you can't layer it with something that uses them. DLSS takes a fair bit of work on both the developer's and NVIDIA's sides (training on those 64x-supersampled reference frames isn't an easy thing to do) and is a significant barrier to entry that only large studios will clear, while this has the potential to be usable by dozens of smaller studios.

@ignaloidas The move to a generically trained DLSS 2.x feels like the point where it became attainable (integrating it is less work than building your own TAA solution for a custom engine, & heavily overlaps with that work, though it's more than adding MLAA). Yes, it's far from free, but it's doable in a custom engine that is already working on enabling TAA (and aiming to compete with Unity or Unreal visuals, where it's a tick-box plugin away).

@shivoa I think that DLSS might be quite dependent on art direction, which could mean games would need to look quite same-y to get good results across the board, and that isn't going to happen.

@ignaloidas So this is something I noted in my blog post. If you look at DLSS working on things it wasn't trained for (like nearest-neighbour texture filtering), you can glimpse where it doesn't replicate the native rendering but the results are still visually pleasing. mastodon.gamedev.place/@shivoa

@shivoa Even if some case wasn't included, DLSS can still work decently, yes. But the problem is that neural networks can be quite sensitive to the overall colour palette: given an image that includes the same objects it was trained on, but with a different lighting temperature or some effect shaders applied, the results may become unpredictable. Neural networks have little ability to work outside the dataset they were trained on, and with games that can be very apparent.


@ignaloidas With several dozen games shipped (and a public Unreal Engine plugin you can test with whatever scene composition you like, if you've got an RTX card), I'd say if you're thinking of DLSS 2.x like DLSS 1 (which was very much as you describe) then it's time to re-evaluate the tech. It really does seem like "smarter TAA", not like other "AI dreaming" upscalers.

@shivoa Ah, I missed the news that DLSS is now generic. It's pretty impressive that they managed that. But while motion vectors do help, and in a sense incorporate TAA into DLSS, I do think that good results should be achievable even without temporal data.

@ignaloidas I just don't see how you can sample once per 2x2 output quad (i.e. 1080p internal res for 4K output) and offer good results that retain detail + suppress aliasing without temporal help. Even inside a static textured polygon, you need the (jittered) samples from previous frames to reach a sufficient sampling rate of the texels for the final output res.
The 45-min GTC talk on DLSS 2 goes into some good fundamentals (and ends on key implementation notes). youtu.be/d5knHzv0IQE
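A toy illustration of that last point (a minimal sketch, not any shipping upscaler: the signal, the jitter footprint, and the frame count are all invented for the example). A single sample per pixel hits an arbitrary phase of fine texture detail, while accumulating sub-pixel-jittered samples across frames converges on the correct filtered value:

```python
import math

def texel(x):
    # High-frequency signal standing in for fine texture detail
    # (period 0.05, matching the assumed pixel footprint below).
    return 0.5 + 0.5 * math.sin(2 * math.pi * 20 * x)

def single_sample(x):
    # One sample per pixel per frame: whatever phase we happen to hit.
    return texel(x)

def temporal_average(x, frames=16):
    # Sub-pixel jitter from a base-2 radical-inverse (Halton) sequence,
    # as TAA-style accumulators commonly use; here a plain average stands
    # in for the history buffer of a static scene.
    total = 0.0
    for i in range(1, frames + 1):
        f, j, n = 1.0, 0.0, i
        while n > 0:
            f /= 2.0
            j += f * (n % 2)
            n //= 2
        total += texel(x + j * 0.05)   # jitter within the pixel footprint
    return total / frames

print(single_sample(0.01))      # far from the area average (~0.98)
print(temporal_average(0.01))   # close to the true mean of 0.5
```

With motion vectors, the same accumulation can follow moving geometry instead of assuming a static scene, which is exactly the reprojection step the GTC talk covers.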

Gamedev Mastodon