Kernel 5.x (5.4.6) slow/freezing during I/O operations

Hi.

Is anybody experiencing something similar?

  1. copying a 5 GB file on the same disk -> everything hangs (even the mouse pointer; I'm using GNOME/Wayland)
  2. extracting a big archive - same
  3. copying from USB - similar
  4. importing a VirtualBox image - the system is not responsive
  5. running a VirtualBox machine - the system is laggy; it's hard to type commands and hard to hit elements with the mouse

After switching back to 4.19.91 everything works fine.

# cat /sys/block/sda/queue/scheduler  
[mq-deadline] kyber bfq bfq-mq none
$ inxi -MmDC
Machine:
  Type: Laptop System: Dell product: Precision 3520 v: N/A 
  serial: <root required> 
  Mobo: Dell model: 0G0G6Y v: A00 serial: <root required> UEFI: Dell 
  v: 1.16.0 date: 07/03/2019 
Memory:
  RAM: total: 15.55 GiB used: 7.62 GiB (49.0%) 
  RAM Report: 
  permissions: Unable to run dmidecode. Root privileges required. 
CPU:
  Topology: Quad Core model: Intel Core i7-7700HQ bits: 64 type: MT MCP 
  L2 cache: 6144 KiB 
  Speed: 3547 MHz min/max: 800/3800 MHz Core speeds (MHz): 1: 3603 2: 3601 
  3: 3370 4: 3546 5: 3560 6: 3520 7: 3448 8: 3491 
Drives:
  Local Storage: total: 447.13 GiB used: 359.18 GiB (80.3%) 
  ID-1: /dev/sda vendor: Toshiba model: TR200 size: 447.13 GiB

What's going on with the 5.x line?


I haven't experienced the same behavior.

You should check the journal:

sudo journalctl -b0 -p4
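
If you want to narrow that down to kernel-side disk errors, something like this may help (the grep pattern is only an illustration, not an exhaustive list):

sudo journalctl -b0 -p3 -k | grep -iE 'ata[0-9]|blk_update_request|i/o error'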

I have experienced I/O errors and freezing on the newer kernels as well. Older kernels and the real-time kernels seem not to cause issues on my Intel box.

If it's an I/O issue, perhaps try moving off mq-deadline to bfq-mq. Since kernel 5.0, the legacy block I/O schedulers have been dropped, so only the multi-queue (mq) types are available. Is the same scheduler in use on your 4.19 kernel as on the 5.x one?

Yes. It is mq-deadline.

# cat /sys/block/sda/queue/scheduler  
[mq-deadline] kyber bfq bfq-mq none

# uname -a
Linux mobilestation 4.19.91-1-MANJARO #1 SMP Sat Dec 21 11:18:44 UTC 2019 x86_64 GNU/Linux

Do you copy from/to the same disk, meaning you read and write to the same disk at the same time?

What is your CPU load while you are copying the 5 GB file, on kernel 4.19 vs. 5.4?

How do you do the copy? cp, rsync, dd, ...?

What transfer rates do you get in MB/s? Are they the same on 4.19 and 5.4?
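
If it helps, here is one rough way to compare the two kernels; iostat comes from the sysstat package, and the file paths are only placeholders:

# terminal 1: per-device throughput and utilisation, sampled every second
iostat -xz 1 /dev/sda

# terminal 2: a timed copy of a large test file on the same disk
time cp /path/to/large-test-file /path/to/copy

Running the same copy under 4.19 and 5.4 and comparing the MB/s and %util columns should show whether the regression is in raw throughput or only in interactivity.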

I was having similar problems and the only thing that helped was changing the scheduler from mq-deadline to bfq.
https://wiki.archlinux.org/index.php/Improving_performance#Changing_I/O_scheduler

Temporarily changing the scheduler:

To list the available schedulers for a device (sda in this example) and the active scheduler (in brackets):

$ cat /sys/block/sda/queue/scheduler

To list the available schedulers for all devices:

$ cat /sys/block/*/queue/scheduler

To change the active I/O scheduler to bfq for device sda, use:

# echo bfq > /sys/block/sda/queue/scheduler

This change does not persist across reboots!

Permanently changing the scheduler:

$ vim /etc/udev/rules.d/60-ioschedulers.rules
# set bfq scheduler for non-rotating (SSD) disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="bfq"

Reboot, or force udev to load the new rules:
https://wiki.archlinux.org/index.php/Udev#Loading_new_rules

If rules fail to reload automatically:

# udevadm control --reload

To manually force udev to trigger your rules:

# udevadm trigger
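
Afterwards, the change can be verified by reading the scheduler file again; the entry in brackets should now be bfq (assuming sda is the device the rule targets):

$ cat /sys/block/sda/queue/scheduler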

I have been dealing with similar issues recently and I just changed my schedulers today. The improvement was huge. I used a different, more involved solution because I use many older kernels as well as newer ones.

I was having speed/freezing and I/O issues on recent kernels as my old vintage 2008 computer usually prefers the older kernels. I thought I would modify the schedulers in use. I just did today's update and made these changes. So far the changes seem very positive.

I'm not sure if my changes are considered best practice, but they seem to be working well from my limited testing.


Create /etc/udev/rules.d/60-schedulers.rules:

sudo nano /etc/udev/rules.d/60-schedulers.rules

Paste this code into it:

# /etc/udev/rules.d/60-schedulers.rules
#
# set "none" scheduler for non-rotating disks
# WARNING: the blk-mq schedulers require the scsi_mod.use_blk_mq=1 boot flag on older kernels
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"

# set bfq-mq scheduler for rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq-mq"


For older kernels, add "scsi_mod.use_blk_mq=1" to the kernel command line.

Edit /etc/default/grub:

sudo nano /etc/default/grub

Add "scsi_mod.use_blk_mq=1" to the GRUB_CMDLINE_LINUX line, for example:

GRUB_CMDLINE_LINUX="quiet scsi_mod.use_blk_mq=1"

After that, update grub:

sudo update-grub
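
After the next reboot you can confirm that the flag actually reached the running kernel by inspecting its command line:

grep scsi_mod.use_blk_mq /proc/cmdline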

On kernels < 4.20, you may need to load the scheduler modules at boot time.

Add the scheduler modules you wish to load at boot to /etc/modules-load.d/schedulers.conf.

Create the conf file:

sudo nano /etc/modules-load.d/schedulers.conf

With the following contents (one module name per line; lines starting with # are ignored):

bfq-mq-iosched
#kyber-iosched
#mq-deadline
#bfq

Uncomment any additional scheduler module you want loaded at boot.

Save, then reboot.


I'm not sure if I've chosen the best schedulers for use with both older and newer kernels, but things seem much improved.
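
For anyone who wants to double-check the result after rebooting, printing every scheduler file at once shows what each device ended up with; the active scheduler is the one in brackets:

grep "" /sys/block/*/queue/scheduler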


For me, the same slow/freeze/lag thing since kernel 5.3.
With the new stable update I tested 5.4 again, but it is no better. For the last few weeks I have gone back to 4.19.
Kernel 5.2 was the last good one for me.

Will try to change the scheduler next week.

Just here to share my experience.
I have a freshly installed Manjaro KDE on a Lenovo L570 with kernel 5.4.13-3, and I'm experiencing the same issues described here. The system is pretty much unusable while transferring/copying huge amounts of data because the user interface lags a lot.
Changing the I/O scheduler helps a lot (thanks dinok), but using the old 4.19.97-1 kernel with the default deadline scheduler still gives a smoother user experience, and with the bfq scheduler it's even better.
I will stick with the 4.19 kernel for the time being.

Hi.

This may be related to dm-crypt; I'm using encryption for /home.
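
If it helps anyone check the same thing, lsblk shows the device stack; a crypt entry in the TYPE column means the filesystem sits on a dm-crypt mapping:

lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT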

Regards
m.

Changing the scheduler on kernel 5.4/5.5 made performance better, but still not good.
I went back to 4.19.
Now, with 5.5.6 or 5.5.7, it is fine again for me, with the same schedulers as on 4.19:
[mq-deadline] for the HDD
[none] for the SSD
It's the first time I've had problems with a new kernel for this long, but it's OK now!

