NVIDIA offloading not powering down GPU

Hi,
I followed the tutorial from https://archived.forum.manjaro.org/t/install-nvidia-prime-on-manjaro-18-1-4/114993 to get NVIDIA offloading working, but it seems that my GPU is always on (RTX 2070).

When I type

glxinfo | grep "OpenGL renderer"

the output is:
OpenGL renderer string: Mesa DRI Intel(R) UHD Graphics 630 (Coffeelake 3x8 GT2)

and

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"

gives:
OpenGL renderer string: GeForce RTX 2070 with Max-Q Design/PCIe/SSE2

But if I launch blender, vkcube, or any other app, it doesn't matter whether I append the env variables: it will always use my nvidia card.

the output from inxi -Fx:

System:    Host: legion Kernel: 5.4.2-1-MANJARO x86_64 bits: 64 compiler: gcc v: 9.2.0 Desktop: KDE Plasma 5.17.4 
           Distro: Manjaro Linux 
Machine:   Type: Laptop System: LENOVO product: 81UH v: Lenovo Legion Y740-15IRHg serial: <root required> 
           Mobo: LENOVO model: LNVNB161216 v: SDK0R32862 WIN serial: <root required> UEFI: LENOVO v: BVCN10WW(V1.06) 
           date: 06/06/2019 
Battery:   ID-1: BAT1 charge: 53.4 Wh condition: 54.2/57.4 Wh (95%) model: 74726570786C6543 324750334337314C status: Unknown 
CPU:       Topology: 6-Core model: Intel Core i7-9750H bits: 64 type: MT MCP arch: Kaby Lake rev: A L2 cache: 12.0 MiB 
           flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 62431 
           Speed: 2600 MHz min/max: 800/2600 MHz Core speeds (MHz): 1: 2600 2: 2600 3: 2600 4: 2600 5: 2600 6: 2600 7: 2600 
           8: 2600 9: 2600 10: 2601 11: 2600 12: 2600 
Graphics:  Device-1: Intel UHD Graphics 630 vendor: Lenovo driver: i915 v: kernel bus ID: 00:02.0 
           Device-2: NVIDIA TU106BM [GeForce RTX 2070 Mobile] vendor: Lenovo driver: nvidia v: 440.36 bus ID: 01:00.0 
           Display: x11 server: X.Org 1.20.6 driver: modesetting,nvidia resolution: 1920x1080~144Hz 
           OpenGL: renderer: Mesa DRI Intel UHD Graphics 630 (Coffeelake 3x8 GT2) v: 4.5 Mesa 19.2.7 direct render: Yes 
Audio:     Device-1: Intel Cannon Lake PCH cAVS vendor: Lenovo driver: snd_hda_intel v: kernel bus ID: 00:1f.3 
           Device-2: NVIDIA TU106 High Definition Audio vendor: Lenovo driver: snd_hda_intel v: kernel bus ID: 01:00.1 
           Sound Server: ALSA v: k5.4.2-1-MANJARO 
Network:   Device-1: Intel Wireless-AC 9560 [Jefferson Peak] vendor: Bigfoot Networks driver: iwlwifi v: kernel port: 5000 
           bus ID: 00:14.3 
           IF: wlp0s20f3 state: up mac: a8:6d:aa:e8:10:cf 
           Device-2: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet vendor: Lenovo driver: r8169 v: kernel port: 3000 
           bus ID: 3f:00.0 
           IF: enp63s0 state: down mac: 9c:5a:44:68:28:5c 
Drives:    Local Storage: total: 2.96 TiB used: 1.56 TiB (52.6%) 
           ID-1: /dev/nvme0n1 vendor: SK Hynix model: HFS256GD9TNG-L3A0B size: 238.47 GiB 
           ID-2: /dev/sda vendor: Seagate model: ST1000LM049-2GH172 size: 931.51 GiB 
           ID-3: /dev/sdb type: USB vendor: Western Digital model: WD Elements 25A2 size: 1.82 TiB 
Partition: ID-1: / size: 233.44 GiB used: 21.10 GiB (9.0%) fs: ext4 dev: /dev/nvme0n1p2 
Sensors:   System Temperatures: cpu: 55.0 C mobo: N/A 
           Fan Speeds (RPM): N/A 
Info:      Processes: 297 Uptime: 42m Memory: 15.52 GiB used: 3.55 GiB (22.9%) Init: systemd Compilers: gcc: 9.2.0 
           clang: 9.0.0 Shell: bash v: 5.0.11 inxi: 3.0.37 

I tried different configs for Xorg but they all behaved like this.

Anyone have a clue on how to put my nvidia to "sleep" and only use it when invoking the env variables?

Thanks in advance

I have only a hunch, not tested myself, and I'm not sure how it should behave, as it's a brand-new feature.
You may want to add an option for nvidia power management.
As I understand, you may use this in a conf file at

/etc/modprobe.d/nvidiapm.conf

(the file name is irrelevant), with this content:

options nvidia NVreg_DynamicPowerManagement=0x01

Alternatively, I suppose you can set this option in your Xorg conf, in the nvidia driver section, like this:

Option "NVreg_DynamicPowerManagement=0x01"

As I said, I have not tested this, my nvidia is old (390xx) :man_shrugging:.
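One way to verify whether the card actually suspends after setting that option (again untested by me, and assuming your GPU sits at bus ID 01:00.0 as your inxi output shows) is to read the kernel's runtime PM state:

cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status

It should print "suspended" once the GPU has powered down and "active" while something is using it.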


I've not tested this myself, but as I understood it, the GPU will stay powered on all the time. Newer generations might profit from power management to reduce power consumption, but completely turning the GPU off isn't possible.

@dglt If the Nvidia PRIME render offload setup is correct, should Xorg be the only process using the nvidia GPU in nvidia-smi, regardless of whether I run a program on the nvidia GPU with prime-run (after installing the nvidia-prime package) or with __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia <command>? The nvidia-smi output is shown below:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce 930MX       Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   43C    P8    N/A /  N/A |     13MiB /  2004MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       683      G   /usr/lib/Xorg                                 12MiB |
+-----------------------------------------------------------------------------+

yes, that xorg process needs to be there even when the nvidia gpu is not being used by other apps/games.

another way to confirm it's set up properly (assuming prime-run is how you launch):

prime-run glxgears
prime-run vkcube

and check nvidia-smi to see if both are running on the nvidia gpu.
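you can also leave it refreshing in another terminal while they run:

watch -n 1 nvidia-smi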

Thanks for the info, and they are both listed when nvidia-smi is run. glxgears reports at least 60 FPS, as it is synchronised with the refresh rate. When I run vkcube on its own it is listed in nvidia-smi, but glxgears on its own is not. If the configuration is still correct, why is vkcube listed in nvidia-smi even when it is not run with the env variables, as @guiltzen also experienced? Is Xorg rendered on both the intel GPU and the nvidia GPU, or just the nvidia GPU?

even though the nvidia gpu remains powered on all the time, it only gets used when running apps/games with the launch parameters. without those parameters everything is done by the intel igpu :nauseated_face:

this really makes little sense to use on anything other than turing-class gpus, since they are the only ones able to drop into a lowered (not off) power state when not being used.
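if you want to watch that for yourself, something like this should print the current perf state and draw (field names from memory, nvidia-smi --help-query-gpu lists them all):

nvidia-smi --query-gpu=pstate,power.draw --format=csv

P8 like in the printout above is one of those lowered states, P0 is full performance. power.draw may come back N/A on some mobile cards, as in that same printout.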

You replied fast, before I finished editing (it took a while investigating vkcube and glxgears). I would like to know about vkcube being listed in nvidia-smi whether or not I use prime-run to run it. Also, when running browsers with prime-run, they are not listed in nvidia-smi.

maybe it's to do with the prime-run package? you can try using the variables on their own without prime-run

when i was playing with render offload i just added these aliases to my .zshrc

# Nvidia render-offload alias
alias nvr="__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia"
alias vkr="__NV_PRIME_RENDER_OFFLOAD=1"

for vulkan apps
vkr vkcube
for opengl
nvr glxgears

but that was just my preference, as i would rather type 3 letters, and the variable for vulkan apps is not the same as for opengl apps, so i'm not sure how prime-run handles that. :man_shrugging:
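if you wanted a single launcher that covers both, a rough sketch of a wrapper function (just my guess at what a prime-run style script does, i haven't read the actual one; nvrun is a made-up name) would be:

# hypothetical wrapper: set both the opengl and vulkan offload variables,
# then run whatever command was passed in.
# __VK_LAYER_NV_optimus comes from nvidia's render offload docs, if i remember right.
nvrun() {
    __NV_PRIME_RENDER_OFFLOAD=1 \
    __GLX_VENDOR_LIBRARY_NAME=nvidia \
    __VK_LAYER_NV_optimus=NVIDIA_only \
    "$@"
}

then nvrun glxgears and nvrun vkcube should both land on the nvidia gpu.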


When I run vkcube it appears in the process list of nvidia-smi whether or not the variables are used, but glxgears only appears in nvidia-smi when it is run with the variables. My hunch is that applications like vkcube and blender will run on the nvidia GPU instead of the intel one if they detect a powered-on nvidia GPU. The nvidia-prime package I installed comes with the necessary Xorg config and a script used to run applications with the nvidia GPU, so instead of creating a file like optimus.config as in the tutorial linked in this topic, I use the nvidia-prime package instead. I knew of the package from this Arch wiki page: https://wiki.archlinux.org/index.php/PRIME#PRIME_render_offload. It is a new package and entered the Manjaro stable branch with the last stable update of 2019.
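To see exactly which variables the script sets, it can be printed (assuming it is on the PATH):

cat "$(which prime-run)"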


it's probably because you have nvidia's vulkan ICD installed but not intel's, so vulkan uses the only ICD available.


I only installed the vulkan-tools package which comes with vkcube

the vulkan packages would be
nvidia:
vulkan-icd-loader lib32-vulkan-icd-loader
intel:
vulkan-intel
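if you want to double check which vulkan ICDs are actually installed, the manifest files should be visible here (path from memory):

ls /usr/share/vulkan/icd.d/

and to see which device the loader ends up handing to an app:

vulkaninfo | grep deviceName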


Thanks @dglt for this information, now everything feels more "sane". With the vulkan-intel package installed, when I launch vkcube it uses the iGPU, and when using the env variables it uses the nvidia GPU (it also appears in nvidia-smi).
So in the end my GPU draws 5 W when idle and is used automatically when certain software needs it (for example, Blender automatically uses my GPU when rendering, with no need for env variables).
In my case the bumblebee solution seems more "economical", since it draws 0 W when off and can still be used with no penalty for CUDA code or background rendering with Blender.
Anyway, I'll keep playing with the nvidia solution. Thanks again for clarifying the issue!


The nvidia vulkan packages were already installed. I recently installed vulkan-intel, and running vkcube without the variables now uses the iGPU; it is not listed in nvidia-smi unless the variables are used. Thanks for all your help.

Edit: @AgentS I have tested nvidia power management with that option and it works on my Optimus laptop with a GeForce 930MX.


