Bug 217686 - zfs_enable=YES causing vm_fault: pager read error, pid 1 (init)
Summary: zfs_enable=YES causing vm_fault: pager read error, pid 1 (init)
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: CURRENT
Hardware: amd64 Any
Importance: --- Affects Many People
Assignee: freebsd-fs (Nobody)
 
Reported: 2017-03-10 16:47 UTC by Farhan Khan
Modified: 2018-11-02 10:28 UTC

Description Farhan Khan 2017-03-10 16:47:52 UTC
I am running 12.0-CURRENT. My uname is below:

FreeBSD localhost 12.0-CURRENT FreeBSD 12.0-CURRENT #0 r313113: Fri Feb  3 01:47:24 UTC 2017     root@releng3.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64


At boot, right before mounting the disks, the kernel would endlessly print:

vm_fault: pager read error, pid 1 (init)

I found that if I removed zfs_enable="YES", the problem did not occur. Importing and mounting a ZFS pool manually was not a problem.
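A minimal sketch of that workaround (the pool name "mypool" is a placeholder, not taken from the report; commands run as root):

```shell
# Keep ZFS from starting automatically at boot (sysrc edits /etc/rc.conf)
sysrc zfs_enable="NO"
# After boot completes, the pool can still be imported and mounted by hand:
zpool import mypool
zfs mount -a
```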
Comment 1 BB Lister 2018-11-01 11:08:52 UTC
I upgraded from 11.1 to 

11.2-RELEASE-p4 FreeBSD 11.2-RELEASE-p4 #0: Thu Sep 27 08:16:24 UTC 2018     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64


and I have the same problem.

As a workaround, I now have

zfs_enable="NO"

in /etc/rc.conf

and in crontab:


@reboot ( sleep 100; /etc/rc.d/zfs onestart ) > /dev/null 2>&1


Now my system is stable again.

Of course, I would like to have zfs_enable="YES", but that causes an endless stream of

vm_fault: pager read error, pid 1 (init)



If you would like more info, please let me know how I can help.
Comment 2 Andriy Gapon 2018-11-01 14:31:17 UTC
(In reply to BB Lister from comment #1)
Check mountpoint properties of all your ZFS filesystems to see if any of them gets mounted over /.
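One way to run that check (a sketch; the pool name `tank` is taken from comment 3, adjust as needed):

```shell
# Show mountpoint-related properties for every dataset; a dataset whose
# mountpoint is "/" (or that overlays /) would shadow the UFS root,
# which could leave init unable to page in its binary.
zfs list -o name,mountpoint,canmount,mounted
# Or recursively inspect the mountpoint property for one pool:
zfs get -r mountpoint tank
```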
Comment 3 BB Lister 2018-11-02 06:11:39 UTC
My /etc/fstab is:

# Device            Mountpoint       FStype  Options                         Dump  Pass#
/dev/ada0s1b        none             swap    sw                              0     0
/dev/ada0s1a        /                ufs     ro                              1     1
/dev/ufs/b5home     /home            ufs     rw,nosuid,noexec                2     2
/dev/ufs/b5usr      /usr             ufs     ro                              2     2
/dev/ufs/b5var      /var             ufs     rw,nosuid                       2     2
md                  /tmp             mfs     rw,noatime,nosuid,noexec,-s96m  0     0
md                  /var/run         mfs     rw,nosuid,noexec,-s4m,-p=777    0     0
/dev/acd0           /cdrom           cd9660  ro,noauto                       0     0
tmpfs               /var/db/ramdisk  tmpfs   rw,nosuid,noexec,mode=777       0     0

My ZFS filesystems are all mounted under /tank:

$ mount | grep zfs
tank on /tank (zfs, local, noatime, nfsv4acls)
tank/_Programs on /tank/_Programs (zfs, local, noatime, nfsv4acls)
tank/jails on /tank/jails (zfs, local, noatime, nfsv4acls)
tank/tmp on /tank/tmp (zfs, local, noatime, nosuid, nfsv4acls)
tank/virtualbox on /tank/virtualbox (zfs, local, noatime, nfsv4acls)
tank/.backupdir on /tank/.backupdir (zfs, local, noatime, nfsv4acls)

This looks like a race condition: the UFS filesystems should be mounted first (after fsck), and only then ZFS. I assume (perhaps wrongly) that ZFS tries to mount before the UFS filesystems are mounted, and that causes the problem.

If the UFS filesystems are mounted first, I can start ZFS without a problem, as I have shown in my previous post.
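The boot-time ordering described above can be inspected with rcorder(8), which prints the rc.d scripts in dependency order (a sketch; exact output varies by system):

```shell
# Show where the zfs script runs relative to fsck and the critical
# local-filesystem mounts; the printed position numbers reveal the order.
rcorder /etc/rc.d/* 2>/dev/null | grep -nE '/(fsck|mountcritlocal|zfs)$'
```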
Comment 4 Andriy Gapon 2018-11-02 10:28:04 UTC
So, can you check mountpoint properties of all your ZFS filesystems?