LaptopVideo2Go Forums

MAX PRE-RENDERED FRAMES BENCHMARKS


Infinity7

The Max Pre-rendered Frames setting in the NVidia Control Panel has always been somewhat of a mystery. The description for it says it "limits the number of frames the CPU can prepare before the frames get processed by the GPU. Increasing this value can result in smoother gameplay at lower framerates." The description does not indicate how the setting might affect gameplay at higher framerates. There is also a misconception that the game gets increasingly delayed from realtime as the value goes up; in reality, some pre-rendered frames get discarded and never get sent to the GPU. Some machines have weak CPUs and powerful GPUs, and with some machines it's the other way around, and as you can imagine the results differ between those two scenarios. In the past, the Max Pre-rendered Frames setting offered values from 0-8. Now with the 300.xx family of drivers we're given values from 1-4 or "Use the 3D application setting." This has caused a few of us to scratch our heads and think "what's up with that?"
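The interaction is easier to see with a toy model. This is only my own sketch of the idea (a bounded queue of frames between CPU and GPU), not how the NVidia driver actually schedules anything, and the millisecond numbers are made-up inputs:

```python
# Toy model of a render-ahead queue (a sketch, NOT NVIDIA's actual
# scheduler): the CPU may prepare at most `max_prerendered` frames
# ahead of the GPU; once that many are queued, the CPU stalls until
# the GPU consumes one.
def simulate(max_prerendered, cpu_ms, gpu_ms, n_frames):
    cpu_done = []  # time each frame's CPU preparation finishes
    gpu_done = []  # time each frame's GPU render finishes
    t_cpu = 0.0
    t_gpu = 0.0
    for i in range(n_frames):
        # The CPU can't run more than max_prerendered frames ahead:
        if i >= max_prerendered:
            t_cpu = max(t_cpu, gpu_done[i - max_prerendered])
        t_cpu += cpu_ms
        cpu_done.append(t_cpu)
        t_gpu = max(t_gpu, t_cpu) + gpu_ms
        gpu_done.append(t_gpu)
    fps = 1000.0 * n_frames / gpu_done[-1]
    lag = sum(g - c for c, g in zip(cpu_done, gpu_done)) / n_frames
    return fps, lag

# Hypothetical GPU-limited case: 5 ms CPU work, 10 ms GPU work per frame.
fps1, lag1 = simulate(1, 5.0, 10.0, 200)  # queue depth 1: CPU and GPU serialize (~67 fps, ~10 ms in queue)
fps3, lag3 = simulate(3, 5.0, 10.0, 200)  # queue depth 3: they overlap (~100 fps, ~25 ms in queue)
```

In this toy case a deep queue raises the frame rate but each frame spends longer between CPU preparation and display, which fits both the benchmark trend and the input-lag complaints discussed in this thread.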

NVidia Inspector still lets us set values of 0-8. I found that when I set the value to "8" in Inspector, the setting automatically changed to "Custom" in the NVidia Control Panel, and when I set the value to "0" in Inspector it automatically changed to "Use the 3D application setting" in the NVidia Control Panel. Therefore "0" is not really 0 Max Pre-rendered Frames; it is whatever the application's default Max Pre-rendered Frames value is instead. In the search for truth I fired up Heaven Benchmark 3.0 and began testing different values for Max Pre-rendered Frames.

Test Machine: i7 975 cpu@4.22 GHz, GTX 580, 1920 x 1080 res, Win 7 64-bit, driver 301.10

Test Settings: 4x AA, 4x Anisotropic, High Textures, Ambient Occlusion off, Normal Tessellation used in DX 11 tests

Score   Min FPS   Avg FPS   Max FPS   DX version   Max Pre-rendered Frames

2202    46.5      87.4      164.0     Dx 9         1
2329    48.4      92.4      184.2     Dx 9         2
2332    47.2      92.6      191.8     Dx 9         3
2334    47.4      92.7      193.4     Dx 9         use 3D app setting (0 in Inspector)
2335    47.5      92.7      195.9     Dx 9         4
2337    48.1      92.8      197.1     Dx 9         8

1806    34.5      71.7      160.9     Dx 11        1
1809    32.9      71.8      161.7     Dx 11        2
1810    35.1      71.8      165.7     Dx 11        3
1810    32.8      71.9      165.3     Dx 11        use 3D app setting (0 in Inspector)
1812    32.0      71.9      177.6     Dx 11        4
1815    33.8      72.0      184.4     Dx 11        8

The Min FPS values in Heaven Benchmark are inconsistent because there is a small delay at the beginning of the run, so don't pay much attention to them. The Scores and Max FPS values are more important here. Overall, the better results come with higher values for Max Pre-rendered Frames; whether I run all tests with higher AA settings or with AA completely off, the trend is the same. It also appears that in Heaven Benchmark 3.0, "Use the 3D application setting" is probably "3".
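To put numbers on that trend, the DX 9 score gains relative to the MPRF 1 run work out like this (scores copied from the table above; "app" stands for "Use the 3D application setting"):

```python
# DX 9 scores from the Heaven 3.0 runs above, keyed by MPRF setting.
dx9_scores = {"1": 2202, "2": 2329, "3": 2332, "app": 2334, "4": 2335, "8": 2337}

baseline = dx9_scores["1"]
for setting, score in dx9_scores.items():
    gain = 100.0 * (score - baseline) / baseline
    print(f"MPRF {setting}: {score} ({gain:+.1f}%)")
```

Nearly the whole gain comes from stepping from 1 to 2 (about +5.8%); everything past 2 adds well under half a percent more.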

The unexpected conclusion is that people who have been setting Max Pre-rendered Frames to "0" have actually had more frames pre-rendering than people who have been putting the setting on "1" or "2". But it turns out "0" is actually better, because "0" is really "3". Ok, now the confusion is all cleared up! :-)


H4ck 3D

Looks like you're referring to 300 series drivers. In the past, people claimed that 0 max pre-rendered frames helped reduce input lag. New drivers include two settings for max pre-rendered frames (check Inspector).

Your Heaven benchmark shows maximum frame rates increasing with a higher pre-rendered frames setting; however, it does not demonstrate the increase in input lag, nor the adverse effect smaller pre-rendered frames values have on frame rate during gaming (especially with vsync enabled).

The description does not indicate how the setting might affect gameplay at higher framerates.

In GPU limited situations (or when vsync is enabled), users will most likely experience increased input lag. The more frames the CPU can prepare before the GPU, the higher the latency.

Infinity7

Looks like you're referring to 300 series drivers. In the past, people claimed that 0 max pre-rendered frames helped reduce input lag. New drivers include two settings for max pre-rendered frames (check Inspector).

Your Heaven benchmark shows maximum frame rates increasing with a higher pre-rendered frames setting; however, it does not demonstrate the increase in input lag, nor the adverse effect smaller pre-rendered frames values have on frame rate during gaming (especially with vsync enabled).

In GPU limited situations (or when vsync is enabled), users will most likely experience increased input lag. The more frames the CPU can prepare before the GPU, the higher the latency.

New drivers have 2 settings for vsync, which is what I think you meant. Some frames which are pre-rendered get discarded and never sent to the GPU. I think there is a misconception that if MPRF is set to 5, the GPU will always process those 5 frames and have to wait for them before getting more. Anyway, I don't use vsync much, but I noticed smoother gameplay with higher MPRF values in CoD MW when I was using lots of AA and Anisotropic Filtering. I was doing very well competitively under those conditions, but then I had the mouse report rate at 1000 and vsync off.

I did notice during testing that when MPRF was at 1, the video seemed a bit more jerky at times. I did not expect to find that if MPRF was set to 0 it was really doing 3 pre-rendered frames.


H4ck 3D

New drivers have 2 settings for vsync, which is what I think you meant. Some frames which are pre-rendered get discarded and never sent to the GPU. I think there is a misconception that if MPRF is set to 5, the GPU will always process those 5 frames and have to wait for them before getting more.

On my system I see two settings in Inspector related to max pre-rendered frames.

Frames are pre-rendered by the CPU, not the GPU. The CPU prepares the next frame(s) ahead of the GPU while the GPU is busy rendering the current one. The CPU rendering ahead is what helps increase performance, but it can also introduce a significant amount of input lag: for example, mouse/keyboard input gets captured in frames that are stuck in a buffer waiting on the GPU to process them (many milliseconds later).
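A rough back-of-envelope for that lag (an approximation assuming the queue stays full at a steady, GPU-limited frame rate, not a measurement):

```python
# If the render queue stays full, input captured in the oldest queued
# frame waits roughly queue_depth frame-times before it reaches the screen.
def queue_lag_ms(fps, queued_frames):
    return queued_frames * 1000.0 / fps

for q in (1, 3):
    print(f"{q} queued frame(s) at 60 fps: ~{queue_lag_ms(60, q):.1f} ms")
```

At a vsync-capped 60 fps, three queued frames add roughly 50 ms of queue wait versus roughly 17 ms for one, which is the kind of difference competitive players notice.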

widescreen169

The input lag is caused when the CPU has to calculate n frames before letting the GPU throw them on the screen with all the details. Higher pre-rendered frames will help loads in benchmarks, as the runs are pre-scripted and therefore easy to move along frame by frame. In a gaming situation everything is random, so the CPU has to wait for input, process a few frames, then hand them to the GPU to get drawn. Smoother, because all the frames are nicely sequential, but laggier because they come later.

Infinity7

On my system I see two settings in Inspector related to max pre-rendered frames.

Frames are pre-rendered by the CPU, not the GPU. The CPU prepares the next frame(s) ahead of the GPU while the GPU is busy rendering the current one. The CPU rendering ahead is what helps increase performance, but it can also introduce a significant amount of input lag: for example, mouse/keyboard input gets captured in frames that are stuck in a buffer waiting on the GPU to process them (many milliseconds later).

I believe that in a fast-paced shooter, if an enemy comes around the corner into view, the pre-rendered frames that don't have the enemy in them are dropped and never get sent to the GPU, so that new frames containing the enemy do.

By default the USB mouse polling rate (report rate) is 125 and with that the mouse sends new information to the CPU every 8 ms.

If you change the polling rate to 250 the mouse sends new info to the CPU every 4 ms.

If you change the polling rate to 500 the mouse sends new info to the CPU every 2 ms.

If you change the polling rate to 1000 the mouse sends new info to the CPU every 1 ms.

A high mouse polling rate puts a lot more load on the CPU though.
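Those intervals follow directly from the rate (interval in ms = 1000 / rate in Hz):

```python
# USB mouse report interval for the common polling rates listed above.
for rate_hz in (125, 250, 500, 1000):
    interval_ms = 1000 / rate_hz
    print(f"{rate_hz} Hz -> {interval_ms:g} ms between reports")
```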

H4ck 3D

I should mention that the second setting I was referring to (called "Maximum frames allowed") affects OpenGL. Prior to the 300 series drivers, the nVidia Control Panel setting was only applicable to Direct3D.

I believe that in a fast-paced shooter, if an enemy comes around the corner into view, the pre-rendered frames that don't have the enemy in them are dropped and never get sent to the GPU, so that new frames containing the enemy do.

I believe that depends on the game developers' implementation. If left unchecked, Direct3D and OpenGL will simply queue up as many commands as they can, irrespective of what actually gets displayed on screen.

