Quick note before I go into any details: I did not find a solution for this problem, unfortunately. I’ll be explaining what happened and show frame time graphs as proof.
So, with that out of the way, let’s get into it. I’m certainly not the only one with this issue. If you use the search engine of your choice, you will find many threads covering this topic (like here and here and here and here and so on). Some managed to get it working, some did not. I’m obviously in the latter category.
What happens? From what I found in my research it seems like the RX 5700 XT GPU aggressively tries to save energy if it is not fully utilized. If you run MSI’s Afterburner or any other monitoring software, then you’ll see the GPU load and frequency being all over the place. In general, this is a good thing – if it does not affect perceived performance. And this is where it fell apart for me.
My system specs:
- Ryzen 5 2600
- Sapphire Pulse RX 570 8GB to be replaced by PowerColor Red Dragon RX 5700 XT
- 16GB 3000MHz CL15 RAM
Clearly, the CPU is the bottleneck, I know that. The GPU was chosen so that a CPU upgrade later in 2020 would unlock additional performance. In an ideal world, the 5700 XT is around twice as fast as the RX 570, as shown in this benchmark graph from the reputable German technology site Computerbase. That ideal world, however, is paired with a 9900K.
I only have four games installed right now, and one of them did manage that jump: The Division 2 in DX12 mode. In that game I saw a boost from 48 fps at 30% CPU utilization to 102 fps at 73% CPU utilization at Ultra settings. However, I’m not playing that game at the moment. What I am playing is Assassin’s Creed Odyssey and Far Cry 5 (and I just finished Ghost Recon: Wildlands, which I will also use to verify improvements in key areas where performance was atrocious with the RX 570).
To make it short: although FPS were higher in all those games (though not as spectacularly as in The Division 2), perceived performance was terrible. I’ll be using ACO and its built-in benchmark to make my point, because it provides a nice frame time graph and it conveniently exports the results to a file.
Here’s how it looks at Very High to Ultra settings (the exact settings don’t really matter) at 1080p.
As can be seen, the frame times are all over the place. There are more than 50 shades of gray in there.
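“All over the place” can be put into numbers: since the benchmark exports its results to a file, the frame times can be fed into a short script that compares average FPS against frame time consistency. This is just a sketch with made-up illustrative numbers – the actual ACO export format is not shown here – but it demonstrates why two runs with the same average FPS can feel completely different:

```python
# Sketch: quantifying frame time consistency from exported benchmark data.
# The frame time lists below are illustrative, NOT actual ACO export values.
import statistics

def frametime_stats(frametimes_ms):
    """Return average FPS, 1% low FPS, and frame time standard deviation."""
    avg_ms = statistics.mean(frametimes_ms)
    worst = sorted(frametimes_ms, reverse=True)
    one_pct = worst[:max(1, len(worst) // 100)]  # the slowest 1% of frames
    return {
        "avg_fps": 1000.0 / avg_ms,
        "one_percent_low_fps": 1000.0 / statistics.mean(one_pct),
        "stddev_ms": statistics.pstdev(frametimes_ms),
    }

# Two runs with the same average frame time but very different consistency:
steady = [16.7] * 100        # a flat graph, ~60 fps
spiky = [8.0, 25.4] * 50     # same mean (16.7 ms), but constant stutter

print(frametime_stats(steady)["stddev_ms"])  # 0.0 -> smooth
print(frametime_stats(spiky)["stddev_ms"])   # 8.7 -> "50 shades of gray"
```

Both runs report the same average FPS, but the standard deviation (and the 1% lows) immediately exposes the stutter that the average hides – which is exactly what the graphs below show visually.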
Now the same settings with a resolution scale of 140%.
Granted, it’s a subtle difference – it is way more obvious in the in-game graph (which, oddly enough, is not exported) – but the number of shades has dropped below 50. Putting a higher load on the GPU resulted in better frame times without actually reducing the average FPS number.
Lastly, the same settings with the RX 570.
Now those are stable frame times! Yes, the performance is lower, not by much mind you, but the overall experience is waaaaaay better. As a result, I’ve returned the 5700 XT and I will go with NVIDIA next time.