Worth remembering that DLSS Performance mode reconstructs its output from a 1080p buffer of color, depth, and motion vectors. Compared with 4K native, nearest-filtered textures should therefore look different up close. But part of how the reconstruction competes with native res (the sharpening, anti-aliasing, and adjusted mipmap transitions) comes from the trained network learning to mask exactly those differences. It probably wasn't trained on many nearest-filtered textures, and there's no way to swap in a model trained specifically for that.
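For intuition, here's a toy sketch of the general temporal-reconstruction idea (upsample the current low-res frame, reproject the accumulated high-res history through the motion vectors, then blend). This is NOT NVIDIA's actual network; all function and parameter names here are made up for illustration, and the real thing replaces the fixed blend with a learned model.

```python
import numpy as np

def toy_temporal_upscale(low_res, history, motion, blend=0.9):
    """Toy temporal upscaler (illustrative only, not real DLSS):
    2x nearest upsample of the current frame, motion-vector
    reprojection of the high-res history, then a fixed blend."""
    H, W = history.shape
    # Nearest-neighbour 2x upsample of the current low-res frame.
    up = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)
    # Reproject history: sample where each pixel came from last frame.
    ys, xs = np.indices((H, W))
    src_y = np.clip(ys - motion[..., 0], 0, H - 1)
    src_x = np.clip(xs - motion[..., 1], 0, W - 1)
    reprojected = history[src_y, src_x]
    # Heavy history weighting is what accumulates detail over frames.
    return blend * reprojected + (1 - blend) * up

# Usage: a 4x4 "low-res" frame reconstructed to 8x8 with a still camera.
low = np.zeros((4, 4))
history = np.ones((8, 8))
motion = np.zeros((8, 8, 2), dtype=int)
out = toy_temporal_upscale(low, history, motion)
```

The point of the sketch is that the output leans heavily on reprojected history plus filtering, which is exactly where a learned model can hide (or smear) texel-sharp detail like nearest-filtered textures.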