Nvidia’s DLSS and AMD’s FSR (FidelityFX Super Resolution) are two techniques that promise higher-resolution gaming without demanding as much from the graphics card. The idea is to let you play the latest games at maximum graphics settings without compromising performance. Although they aim for the same result, the two solutions differ considerably in approach and support: DLSS uses artificial intelligence and can deliver better results, while AMD’s FSR works on graphics cards from any brand – including Nvidia’s own. Below, learn how the two technologies work and understand their advantages and disadvantages.
Internal resolution, reconstruction, and upscaling
To understand why solutions like FidelityFX Super Resolution (FSR) and Deep Learning Super Sampling (DLSS) are important, you need to realize that graphics in games depend not only on aspects such as resolution but also on the impact that this high-resolution video will have on performance.
Suppose you want to enjoy a demanding recent game on a 4K monitor and see graphics that make the most of the screen’s native resolution. This means your computer’s graphics card needs to generate an image of 3840 x 2160 pixels (a total of 8,294,400 pixels). And since games only make sense in motion, these more than 8 million pixels need to be redrawn many times per second – at least 60 times (60 fps) for smooth performance, or 30 times (30 fps) as a lower bar.
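The arithmetic above can be checked with a short calculation (the figures are taken straight from the 4K example in the text):

```python
# Pixel budget for native 4K rendering.
width, height = 3840, 2160

pixels_per_frame = width * height
print(pixels_per_frame)       # → 8294400 pixels in every frame

# At 60 fps the GPU must produce this many pixels every second:
pixels_per_second = pixels_per_frame * 60
print(pixels_per_second)      # → 497664000 pixels per second
```

Nearly half a billion shaded pixels per second – before any lighting, physics, or post-processing effects are counted.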
We are using 4K here as an example because this is a high resolution that even high-end graphics cards can struggle to deliver consistently. But the same reasoning goes for any final resolution that is too high for the available graphics card.
Rendering a 3840 x 2160 pixel frame every 16 milliseconds – the budget for delivering 60 new frames per second – is a demanding task, especially considering visual effects such as particles, smoke and fog, physics simulation, texture mapping, and anti-aliasing, among other examples. These are also the graphics card’s job and add to the overall cost, making consistently high resolution at high performance something that is not always feasible, even on good-quality graphics cards.
FSR and DLSS are ways to lighten this load on the graphics card, lowering the resolution at which the game is actually rendered without losing much quality and allowing higher performance in the end. By rendering the game internally at a “low” resolution such as Full HD (1920 x 1080 pixels) with maximum graphics settings and then reconstructing the image, 4K at 60 fps becomes viable even on GPUs that could not reach that level of performance natively.
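A quick calculation shows why rendering internally at Full HD is so much cheaper than native 4K:

```python
# Pixel workload: native 4K versus a Full HD internal resolution.
native_4k = 3840 * 2160   # 8,294,400 pixels
full_hd = 1920 * 1080     # 2,073,600 pixels

print(native_4k / full_hd)  # → 4.0
```

The GPU shades exactly a quarter of the pixels, which is where the headroom for 60 fps (and for heavier effects) comes from.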
Upscaling and image reconstruction
If 4K is too much for the graphics card, it may be possible to render the game’s graphics at a lower internal resolution, easing the workload on the GPU, while still displaying the image in 4K with some loss of quality. It is worth noting that this loss will only be noticeable to very picky players who scrutinize the smallest details.
This is the idea behind two solutions that have been used for some years now, especially in consoles. Instead of completely sacrificing the graphics card trying to achieve an unfeasible 4K resolution, developers have started to run their games in lower internal resolution, within ranges such as 1440p or 1800p, using specific upscaling techniques to display an image close to 4K.
The performance saved by rendering the game internally at a lower resolution allows stable frame rates with fancy graphics effects. The cruder techniques for stretching the lower internal resolution to an “artificial” final one are simply called upscaling. They are rudimentary and generally do not apply more sophisticated processing to ensure that the final high-resolution image is free of artifacts and defects introduced by the “stretching” process.
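As a rough illustration of what such crude upscaling does – this is a toy nearest-neighbor sketch, not AMD’s or Nvidia’s actual algorithm – each output pixel simply copies the closest pixel of the low-resolution source:

```python
# Minimal nearest-neighbor upscaler: each source pixel is repeated
# `factor` times horizontally and vertically. This is the kind of
# rudimentary "stretching" described above, not FSR or DLSS.
def upscale_nearest(image, factor):
    """image: list of rows of pixel values; factor: integer scale."""
    out = []
    for row in image:
        wide_row = [px for px in row for _ in range(factor)]
        out.extend(list(wide_row) for _ in range(factor))
    return out

low = [[1, 2],
       [3, 4]]
print(upscale_nearest(low, 2))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Note that going from 1080p to 4K is exactly a factor of 2 in each dimension; repeating pixels like this is what produces the blocky artifacts that reconstruction techniques try to avoid.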
Image reconstruction techniques, on the other hand, are more sophisticated and involve approximation and per-frame processing so that the final resolution, also artificial, is of good quality and does not have as many artifacts or defects linked to the artificial increase of the internal resolution. The best implementations of this approach can even produce “fake 4K” graphics that are difficult to distinguish from native 4K.
FSR and DLSS: How they work
FSR and DLSS are solutions created by AMD and Nvidia to optimize this process of generating a final image with a higher resolution than the initial one. The idea is that the graphics card itself and its drivers offer these techniques, simplifying developers’ lives and allowing even higher quality when generating higher resolutions from lower initial resolutions.
However, the two approaches are quite different from each other. AMD’s FSR is a reconstruction technique that uses conventional algorithms to transform a frame at the internal resolution into one at the final resolution, in a process more prone to image quality loss – at least in the demonstrations so far.
The “problems” of FSR are associated with textures and the technique’s difficulty in preserving fine details, such as strands of hair or surfaces with some kind of granularity – the rough appearance of stone or other highly detailed materials, for example.
It is worth remembering that, in any case, this apparent loss of quality is difficult to perceive with moving graphics, and is more an observation of the differences between the two technologies. Ultimately, some loss of quality will always be inevitable in image reconstruction processes for higher resolutions.
Nvidia’s DLSS goes the other way. Instead of cruder processing applied on top of each game frame, the technique uses artificial intelligence to approximate what the ideal final image should look like at the higher resolution.
For this to be possible, the developer needs to explicitly support DLSS in their game. Early versions of DLSS also required providing Nvidia with sets of images to train the AI on each title, although since DLSS 2.0 a single generalized network serves all games. Machine learning makes DLSS more accurate and yields higher-quality images. After testing, the specialist channel Digital Foundry even concluded that a game like Control can look better running with DLSS than at native resolution.
Differences between FSR and DLSS
In short, one works more deterministically, performing a simpler image-reconstruction process, while the other relies on AI and can produce superior results. But there is another big difference: Nvidia’s DLSS is tied to very specific hardware. To work, the technology needs dedicated AI processing cores (Tensor Cores), present only in the brand’s GeForce RTX cards. Nvidia’s MX and GTX 10 cards, as well as AMD’s Radeon and Intel’s GPUs, do not support DLSS.
Besides being compatible with only a small portion of the GPUs available on the market, Nvidia’s DLSS needs to be supported by the developers of each game. Although it is recognized as an important technology, the relatively small amount of compatible hardware on the market may discourage developers from adding DLSS support at this time.
AMD’s FSR also needs developers to introduce support for the technique into the internal code of their games, but the trend is for the technology to be more widely adopted, since, unlike DLSS, FidelityFX works with any graphics card from AMD, Intel, and even rival Nvidia. In addition to PCs, Sony and Microsoft consoles, all equipped with AMD graphics processors, are also supported.
Is DLSS better?
In results and performance, yes. DLSS is more accurate and generates higher-quality final images, in which artifacts and defects introduced by the conversion from the game’s internal resolution to a higher final resolution are less apparent or simply not noticeable.
AMD’s FSR applies a purely algorithmic solution that is not as sophisticated and can lose some detail in the result. But it is worth remembering that the feature is still in its first generation – the first version of DLSS was nothing special until DLSS 2.0 was released – and, unlike its rival’s technology, it works on any graphics card.