On March 11, 2016, a patch was released for Rise of the Tomb Raider that enables support for DirectX 12. We won’t bore you with the technical details, but one of the benefits of DirectX 12 is that it improves gaming performance through more efficient use of the available CPU cores. Here is Nixxes Software’s statement about their implementation of DirectX 12:
“For Rise of the Tomb Raider the largest gain DirectX 12 will give us is the ability to spread our CPU rendering work over all CPU cores, without introducing additional overhead. This is especially important on 8-core CPUs like Intel i7’s or many AMD FX processors.
Let me explain how this helps the performance of your game. When using DirectX 11, in situations where the game is under heavy load – for example in the larger hubs of the game – the individual cores may not be able to feed a fast GPU like an NVIDIA GTX 980 or even NVIDIA GTX 970 quick enough. This means the game may not hit the desired frame-rate, requiring you to turn down settings that impact CPU performance. Even though the game can use all your CPU cores, the majority of the DirectX 11 related work is all happening on a single core. With DirectX 12 a lot of the work is spread over many cores, and the framerate the game will run at can be much higher for the same settings.”
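The core idea in the quote above can be sketched with a toy model. This is not DirectX code; it is a back-of-the-envelope Python illustration with made-up costs, showing how splitting submission work across cores lifts the CPU-imposed frame-rate ceiling:

```python
# Illustrative model (made-up numbers, not measured data) of why moving
# command recording off a single core raises the frame-rate ceiling.

DRAW_CALL_COST_MS = 0.004   # hypothetical CPU cost to record one draw call
DRAW_CALLS = 5000           # hypothetical draw-call count in a busy hub scene
CORES = 8

# DX11-style: nearly all submission work lands on one core.
dx11_cpu_ms = DRAW_CALLS * DRAW_CALL_COST_MS

# DX12-style: recording is split roughly evenly across the cores.
dx12_cpu_ms = dx11_cpu_ms / CORES

def fps_cap(cpu_ms):
    """Maximum frame rate if CPU submission time were the only limit."""
    return 1000.0 / cpu_ms

print(f"DX11 single-core ceiling: {fps_cap(dx11_cpu_ms):.0f} fps")
print(f"DX12 {CORES}-core ceiling: {fps_cap(dx12_cpu_ms):.0f} fps")
```

In practice the split is never perfectly even and the GPU imposes its own limit, but this is the shape of the effect Nixxes is describing.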
While some gamers using the latest CPUs may not care about this, it is definitely good news for gamers who are still using relatively old CPUs like the AMD Phenom II X4 965 BE and the Intel Core i7-920. Not all gamers can, or want to, replace their CPU every time a new generation arrives. It will be interesting to see how the CPU affects game performance in DirectX 12 mode.
How We Tested
The PC we used for testing is shown below.
Processor: Intel Core i7 4790K 4.0 GHz (Turbo disabled)
Graphics Card: Gigabyte GeForce GTX 970 4GB
Motherboard: Gigabyte Z97-D3H rev 1.0
Memory: 4GBx2 DDR3 2400 MHz
Power Supply: Corsair HX 620
Driver: GeForce 368.39
Operating System: Windows 10 64-bit
There is still no user-friendly benchmarking tool like FRAPS that is compatible with DirectX 12. The game has a built-in benchmark tool, but we won’t use it because we have found that it does not represent the performance you’ll get when actually playing the game. For now, we will just show the CPU usage and GPU usage, which were recorded using HWiNFO. We can use that information to determine whether DirectX 12 really delivers on its promise. We don’t have an old CPU on hand right now, so we simply reduced the clock speed of the Core i7 4790K to 2.0 GHz to introduce a bottleneck for our GeForce GTX 970. That is a 50% reduction in clock speed and, as you will see later, it does bottleneck the GeForce GTX 970. For additional comparison, we also tested with the Core i7 4790K running at 3.0 GHz.
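HWiNFO can log its sensor readings to a CSV file, and a small helper along these lines can turn such a log into an average usage figure. This is a sketch: the column header used here ("GPU Core Load [%]") is an assumption, since headers vary with your hardware and HWiNFO configuration.

```python
import csv
from statistics import mean

def average_usage(csv_path, column="GPU Core Load [%]"):
    """Average one sensor column from a HWiNFO-style CSV log.

    The default column name is an assumption; check your own log's
    header row for the exact sensor labels.
    """
    with open(csv_path, newline="") as f:
        return mean(float(row[column]) for row in csv.DictReader(f))
```

For CPU load you would pass the matching column name instead, e.g. `average_usage("log.csv", "Total CPU Usage [%]")` (again, the exact label depends on your setup).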
As with our other articles, the test results are the average of three benchmark runs.
The game settings used are shown below:
Resolution: 1920 x 1080
Anti-aliasing: SMAA
Texture Quality: High
Anisotropic Filter: 16x
Shadow Quality: Very High
Sun Soft Shadows: Very High
Depth of Field: Very High
Level of Detail: Very High
Dynamic Foliage: Very High
Ambient Occlusion: On
Pure Hair: On
Specular Reflection Quality: Very High
Vignette Blur: Off
Motion Blur: Off
Bloom: On
Tessellation: On
Screen Space Reflections: On
Lens Flares: On
Film Grain: Off
Before we proceed with the test results, we first need to show you the part of the game that we benchmarked. Read this article to learn why game benchmark methodology matters.
Test Results
Though we can’t show you a graph of frame rate over time, we took screenshots of a particular scene (about 10 seconds into the benchmark sequence) in DX11 mode and in DX12 mode. Steam has an in-game frame counter, but it is too small to read in screenshots, so we used Dxtory to display the frame rate in the screenshots.
(Core i7 4790K @ 2.0 GHz – DX11 mode)
(Core i7 4790K @ 2.0 GHz – DX12 mode)
(Core i7 4790K @ 3.0 GHz – DX11 mode)
(Core i7 4790K @ 3.0 GHz – DX12 mode)
So, what now?
DirectX 12 does increase performance when the CPU is bottlenecking the GPU. This is shown by the CPU usage, GPU usage, and screenshot comparisons when running the Core i7 4790K at 2.0 GHz. In DX11 mode, the GPU usage was consistently below 90% in the first 20 seconds of the benchmark sequence, and that low GPU usage was reflected in the lower frame rate shown in the screenshots above. Running the game in DX12 mode increased the frame rate by approximately 16% in that particular scene. Also notable was the higher CPU usage in DX12, which shows the CPU is being used more effectively in DX12 mode. However, when the CPU clock speed was increased to 3.0 GHz, there was almost no difference between DX11 mode and DX12 mode, which means the CPU is no longer the limiting factor.
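For clarity, the ~16% figure is just the relative frame-rate gain. As a quick sketch (the frame rates below are illustrative stand-ins, not the exact values from our screenshots):

```python
def percent_gain(fps_dx11, fps_dx12):
    """Relative frame-rate gain of DX12 over DX11, in percent."""
    return (fps_dx12 - fps_dx11) / fps_dx11 * 100.0

# Hypothetical pair of readings chosen to show the arithmetic;
# any pair in a ratio of 58:50 yields a 16% gain.
print(f"{percent_gain(50, 58):.0f}% faster in DX12")
```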
The results we’ve shown are not conclusive yet, and we’re not suggesting that it is okay to pair old or slow CPUs with relatively fast GPUs like the GeForce GTX 970. We have simply confirmed that DirectX 12 really does deliver benefits on the CPU side. There is still a lot to explore in DirectX 12, and we will do further tests on various combinations of CPUs and GPUs.
For now, let’s get back to gaming.
3 Comments
wow, can an AMD card handle that kind of temperature? :O
that depends on the card
DX12 gave me a 6 FPS boost in Rise of the Tomb Raider 🙂
I recently upgraded from Win7 on an HDD to Win10 on an SSD. I was getting 41 fps earlier in ROTR on patch #6. Now on Win10 with patch #7 and DX12 I am getting 47 fps. The Crimson driver version was the same before and after (16.7.3).
The game crashed on the first run and there were some graphical glitches on the second run, but after that everything is smooth. The game looks more beautiful in DX12 and more objects are visible than in DX11. Loving ROTR on DirectX 12.
R9 270X (GCN 1.0) OCed @ 1165/1500