So I am still struggling with the power-saving features of my Nvidia Optimus notebook with an Nvidia Turing chip. Perhaps someone could guide me through the investigation of my specific case.
I am using an XMG Fusion 15 notebook with an Nvidia GTX 1660 Ti Mobile chip.
```
System:    Host: luke-pc Kernel: 5.6.3-2-MANJARO x86_64 bits: 64 compiler: gcc v: 9.3.0
           Desktop: i3 4.18 Distro: Manjaro Linux
Machine:   Type: Laptop System: Schenker product: XMG FUSION 15 (XFU15L19) v: Late 2019
           serial: <root required> Mobo: Intel model: LAPQC71A v: K54899-303
           serial: <root required> UEFI: Intel v: QCCFL357.0062.2020.0313.1530 date: 03/13/2020
Battery:   ID-1: BAT0 charge: 93.5 Wh condition: 93.5/93.5 Wh (100%) model: standard status: Full
CPU:       Topology: 6-Core model: Intel Core i7-9750H bits: 64 type: MT MCP arch: Kaby Lake
           rev: A L2 cache: 12.0 MiB
           flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 62431
           Speed: 869 MHz min/max: 800/2600 MHz Core speeds (MHz): 1: 807 2: 800 3: 800 4: 800
           5: 800 6: 800 7: 800 8: 800 9: 800 10: 800 11: 800 12: 800
Graphics:  Device-1: Intel UHD Graphics 630 driver: i915 v: kernel bus ID: 00:02.0
           Device-2: NVIDIA TU116M [GeForce GTX 1660 Ti Mobile] vendor: Intel driver: N/A
           bus ID: 01:00.0
           Display: x11 server: X.Org 1.20.8 driver: intel resolution: 1920x1080~144Hz
           OpenGL: renderer: Mesa Intel UHD Graphics 630 (CFL GT2) v: 4.6 Mesa 20.0.4
           direct render: Yes
Audio:     Device-1: Intel Cannon Lake PCH cAVS driver: snd_hda_intel v: kernel bus ID: 00:1f.3
           Sound Server: ALSA v: k5.6.3-2-MANJARO
Network:   Device-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet vendor: Intel
           driver: r8169 v: kernel port: 3000 bus ID: 3d:00.0
           IF: enp61s0 state: up speed: 1000 Mbps duplex: full mac: b0:25:aa:33:16:bd
           Device-2: Intel Wi-Fi 6 AX200 driver: iwlwifi v: kernel port: 3000 bus ID: 3e:00.0
           IF: wlp62s0 state: up mac: 94:e6:f7:a4:27:57
           IF-ID-1: docker0 state: down mac: 02:42:81:cb:5d:5f
Drives:    Local Storage: total: 894.25 GiB used: 643.96 GiB (72.0%)
           ID-1: /dev/nvme0n1 vendor: Samsung model: SSD 970 EVO Plus 500GB size: 465.76 GiB
           ID-2: /dev/nvme1n1 vendor: Corsair model: Force MP510 size: 894.25 GiB
Partition: ID-1: / size: 457.16 GiB used: 80.84 GiB (17.7%) fs: ext4 dev: /dev/nvme0n1p2
Sensors:   System Temperatures: cpu: 58.0 C mobo: 44.0 C Fan Speeds (RPM): N/A
Info:      Processes: 288 Uptime: 21h 47m Memory: 15.49 GiB used: 3.12 GiB (20.1%) Init: systemd
           Compilers: gcc: 9.3.0 Shell: zsh v: 5.8 inxi: 3.0.37
```
- video-hybrid-intel-nvidia-440xx-prime driver installed
- optimus-manager installed (I decided that the other solutions do not meet my expectations)
- confirmed that the D3 power management options are configured as described by Nvidia
- watched in powertop whether the Nvidia GPU is at 100% or at 0%
- installed and configured TLP with TLPUI
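For reference, the D3 runtime power management setup I followed (from the chapter on runtime D3 in Nvidia's driver README) boils down to a modprobe option plus a udev rule; the file names below are my own choice, and the udev rule is just the excerpt relevant to the VGA controller:

```
# /etc/modprobe.d/nvidia-pm.conf
# 0x02 = fine-grained runtime D3 power management (Turing and newer)
options nvidia "NVreg_DynamicPowerManagement=0x02"

# /lib/udev/rules.d/80-nvidia-pm.rules (excerpt, as given in Nvidia's README)
# Enable runtime PM for the Nvidia VGA controller on driver bind
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
```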
Normally the Nvidia GPU should "turn off" itself when it is not used. Until yesterday I thought this was not working at all, because I mainly used the hybrid mode of optimus-manager and powertop always showed the GPU as "on". The fans were running almost all the time, with little breaks in between.
Then I read in another thread that it seems to work in intel mode. So I switched and it did not work for me initially.
I pulled the power cable, and after some minutes the Nvidia GPU went to sleep.
The fans got silent and the battery runtime increased from ~4.5 h to ~9 h.
So I would like to investigate how I can reach this state in hybrid mode and with the power cable plugged in.
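One likely explanation for the on-battery-only behaviour is TLP's runtime power management defaults: if I am not mistaken, TLP by default sets PCIe runtime PM to "on" (devices stay active) on AC and "auto" (devices may suspend) on battery. A sketch of the settings to try (the file may be /etc/tlp.conf or a drop-in under /etc/tlp.d/, depending on the TLP version; option names are from TLP's documentation):

```
# /etc/tlp.conf (excerpt)
# "auto" allows PCIe devices to runtime-suspend, "on" keeps them active.
RUNTIME_PM_ON_AC=auto
RUNTIME_PM_ON_BAT=auto
```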
If I plug the power cable back in afterwards, the GPU stays in sleep mode.
But then I cannot switch to nvidia or hybrid mode anymore; I suppose the Nvidia GPU does not wake up anymore.
However, when I executed the inxi command just now, the Nvidia GPU did wake up, and again it did not go back to sleep by itself.
Now I was able to switch to hybrid mode again, plug the power cable in, switch to intel mode, unplug the power cable, and then the GPU went to sleep again.
Plugging and unplugging without switching did not send the GPU to sleep...
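A way to check the GPU's state during these experiments without waking it up: reading the sysfs runtime PM files does not resume the device, whereas tools that touch PCI config space (lspci, and therefore probably inxi) do wake it, which would explain the observation above. A sketch, using the bus ID 01:00.0 from my inxi output:

```shell
# Print the runtime PM settings of the Nvidia GPU
# (bus ID 0000:01:00.0 taken from the inxi output above -- adjust as needed).
gpu=/sys/bus/pci/devices/0000:01:00.0
for f in power/control power/runtime_status; do
    printf '%s: ' "$f"
    # "control" should read "auto" for runtime PM to work;
    # "runtime_status" reads "suspended" when the GPU is asleep.
    cat "$gpu/$f" 2>/dev/null || echo "n/a"
done
```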
Current Optimus-manager settings:
- start with intel
- switching method: none
- PCI reset: no
- PCI power control: off
- PCI remove: off
- auto logout: true
- intel: driver: intel, modeset: on
- nvidia: modeset: on, PAT: on, Ignore ABI: on, Overclocking: on, Triple Buffer: off
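For completeness, these settings should correspond roughly to the following in /etc/optimus-manager/optimus-manager.conf (a sketch; I am reconstructing the key names from optimus-manager's documentation, so they may not match my file exactly):

```
[optimus]
startup_mode=intel
switching=none
pci_reset=no
pci_power_control=no
pci_remove=no
auto_logout=yes

[intel]
driver=intel
modeset=yes

[nvidia]
modeset=yes
PAT=yes
ignore_abi=yes
options=overclocking
```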
Thanks in advance and best regards,