Performance regression with NVIDIA 390.25

Heh, I had issues with the 384.x series, which were resolved by the 390.x series ;p

AMD is finally providing competition to Nvidia, they are open-sourcing their work on Linux, and they are actually supporting the community, so when it makes sense I will definitely jump over, but for now I need something high-performing for gaming.

Hmm... Would it be possible to offer more than 1 version? I hate to ask because it would be double the work for the Manjaro Team :worried:
Nvidia just needs to get this resolved, but knowing them they will drag their feet...

Yes... confirmed with Chromium.
No problems at all during my daily work routine (Scribus mainly, Inkscape, darktable, GIMP).

roby

Is this regression worse than the last one in the 384 series, which was fixed by the 390 series?

I have no performance issues with my setup. It may be DE-specific; KDE in particular appears to have issues.

It's also interesting how many people in the Arch thread are using vsynctester.com with Chromium despite the site itself saying that Vsync is broken in Chromium...

I wonder whether this is a bug in Chromium which the new drivers have highlighted, but people are, of course, blaming the new thing.

From what I'm seeing when trying to use Chrome, it is pretty much unusable. It's not just slow: when scrolling a webpage the entire window goes black, then flashes and redraws section by section, taking about 10 seconds to redraw the screen with each press of the down arrow.

I have seen posts saying that disabling hardware acceleration has solved it for some users.
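If anyone wants to try that, it can be toggled in chrome://settings ("Use hardware acceleration when available") or, as a quick test, by launching the browser without GPU acceleration (standard Chromium flag, but double-check it against your version):

    # start Chromium without GPU acceleration for a quick comparison
    chromium --disable-gpu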

It seems to be touch and go... Most people are complaining about Chromium/Chrome, so it's hard to filter out whether they are also having problems with their DE.

I'm using KDE and I didn't notice anything until I installed Chrome and tested. I do have export __KWIN_TRIPLE_BUFFER=1 in my kwin.sh profile, so I don't know whether that is helping with the KDE issues or not. If needed I can take it out and test when I get home from work.
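For reference, that file is just a one-line profile script (the location below is an assumption; adjust it to wherever your kwin.sh actually lives):

    # /etc/profile.d/kwin.sh (assumed path)
    # tell KWin to use triple buffering with the proprietary Nvidia driver
    export __KWIN_TRIPLE_BUFFER=1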

I sporadically receive these messages in journalctl since the 390.25 upgrade.

nvLock: client timed out, taking the lock

I changed my kernel boot parameters to disable the High Precision Event Timer, and so far it has eliminated the freezes during Chromium usage.

hpet=disable clocksource=tsc tsc=reliable
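For anyone wanting to try the same thing, this is roughly how it looks with GRUB (sketch only; the file path assumes a standard Manjaro GRUB install):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet hpet=disable clocksource=tsc tsc=reliable"

    # regenerate the GRUB config and reboot afterwards
    sudo update-grub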

So update...
This weekend I was going to play Tomb Raider (2013) Native Port by Feral...

With the 390.25 drivers the game stutters in the videos and QTE segments, to the point that it is unplayable. Very similar to what I see in Chrome.

I reverted to the 387.34 drivers I built a couple of days ago, and the game plays buttery smooth on Ultimate (GTX 980 Ti).

So generally speaking I don't see an impact on KDE or normal operations, and my Vulkan games (DOOM 2016, etc.) are faster... It appears they did fix the performance issues and memory leaks from the 384.x series, but also introduced regressions in some of the older code paths.

So right now, I'm going to stick with the 387.34 drivers so I can play my game. :roll_eyes:

PS. I didn't try ryanmusante's trick because I have no information on what it does.


I currently don't see any performance drops either, but I have to run a few more games to be sure.


Graphics:  Card: NVIDIA GP104 [GeForce GTX 1070]
           Display Server: x11 (X.Org 1.19.6 ) driver: nvidia
           Resolution: 1920x1080@50.00hz, 1920x1200@59.95hz, 2560x1440@59.95hz
           OpenGL: renderer: GeForce GTX 1070/PCIe/SSE2 version: 4.5.0 NVIDIA 390.25

Nothing about the Nvidia 390.25 drivers and problems people are having?
See: Performance regression with NVIDIA 390.25

Hi,

Everything seems to work well.
I only noticed in the NVIDIA X Server Settings panel that the "GPU UTILIZATION" and "Video engine utilization" values are always at 0%. No video tearing.

Screenshot_20180217_122743

Direct rendering is OK

Screenshot_20180217_122951

OpenGL is OK

Screenshot_20180217_123110

My nvidia.conf

Screenshot_20180217_123305

Many thanks to the Manjaro Team!

EDIT: disabling Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerLevel=0x1; PowerMizerDefaultAC=0x1;" in my nvidia.conf restores "GPU UTILIZATION" and "Video engine utilization". Maybe forcing full power breaks those readings...
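To be clear, by "disable" I mean commenting the option out in the Device section, roughly like this (the identifier and the rest of the section are just an example):

    Section "Device"
        Identifier     "Device0"
        Driver         "nvidia"
    #    Option         "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerLevel=0x1; PowerMizerDefaultAC=0x1;"
    EndSection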

@Spionem
There is no tearing if you turn the full composition pipeline on.

Also check nvidia-smi.
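For example, to watch the utilisation values refresh every second (plain nvidia-smi, nothing Manjaro-specific):

    # refresh the nvidia-smi output once per second
    watch -n 1 nvidia-smi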

[screenshot: nvidia-smi output]
Here the values do change; there is no tearing, but video doesn't run smoothly. In Chromium I sometimes see flashes, and parts of the screen turn black (rectangular shapes).

Hi DeMus,

can you share your nvidia.conf file?

Thanks.

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 390.25  (buildmeister@swio-display-x86-rhel47-03)  Wed Jan 24 20:44:52 PST 2018

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "Module"
    Load           "dbe"
    Load           "extmod"
    Load           "type1"
    Load           "freetype"
    Load           "glx"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "LG Electronics 24EN43"
    HorizSync       30.0 - 83.0
    VertRefresh     56.0 - 75.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 760"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "DFP-0"
    Option         "metamodes" "nvidia-auto-select +0+0 {ForceCompositionPipeline=On, ForceFullCompositionPipeline=On}"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

I'm not an expert, but as far as I know you can use either "ForceCompositionPipeline" or "ForceFullCompositionPipeline", not both at the same time.
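For example, a metamodes line that keeps only the full pipeline would look something like this (untested sketch; adapt it to your own Screen section):

    Option "metamodes" "nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}"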


I agree with screwtape.

Threads are split so I will write my answer in this one as well.

When I switch both settings off in the Nvidia settings program and then switch on ForceCompositionPipeline, I can also switch on ForceFullCompositionPipeline.
When I first click ForceFullCompositionPipeline to switch it on, ForceCompositionPipeline is switched on automatically, although it is grey instead of blue.
ForceFullCompositionPipeline alone is not possible.

Read the text in the black area (tooltip):
[screenshot: nvidia-settings tooltip]

I think this is possible in the Nvidia settings, but then "ForceCompositionPipeline" will be deactivated. But if you specify both in nvidia.conf, I can imagine that nothing happens, so neither the one option nor the other is activated.
It's just a hunch, though. I hope you find someone who can help you better with this.

Nvidia says something different in their tooltip, see picture above.

Have you tried using only one of the options?

Besides, this fiddling with KDE Plasma and the Nvidia drivers made me switch to Gnome. It just works. I know it's not an option for many people.
Here is a link whose instructions I followed a few days ago. Maybe it'll help you out:
