Summary: | [10.1-RELEASE][panic] ZFS TRIM. Kernel dies within seconds after mounting ZFS. vfs.zfs.trim.enabled=0 fixes it | ||
---|---|---|---|
Product: | Base System | Reporter: | Palle Girgensohn <girgen> |
Component: | kern | Assignee: | Steven Hartland <smh> |
Status: | Closed DUPLICATE | ||
Severity: | Affects Only Me | CC: | delphij, smh |
Priority: | --- | Flags: | bugmeister: mfc-stable10? bugmeister: mfc-stable9? bugmeister: mfc-stable8? |
Version: | 10.1-RELEASE | ||
Hardware: | Any | ||
OS: | Any | ||
See Also: | https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=195061 |
Description
Palle Girgensohn
2014-11-12 23:16:44 UTC
This looks like the same issue as the one noted in https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=195061.

---

Didn't notice at the time that you're mounting a file-backed volume, not a geom-backed one.

---

File-backed volume? No, it uses GPT partitions:

```
# zpool history
History for 'tank':
2013-02-25.14:56:06 zpool create -f -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache tank /dev/gpt/disk0.nop
2013-02-25.14:56:22 zpool export tank
2013-02-25.14:58:23 zpool import -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache tank
2013-02-25.15:00:15 zpool set bootfs=tank tank
...
```

It is a ZFS-on-root setup, probably set up according to the guidelines at https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE.

---

(In reply to Palle Girgensohn from comment #2)
> File backed volume? No, it uses gpt partitions?
> ...

Your trace disagrees, as it mentions vdev_file_io_start, which is only called for a volume created from a file; if it were GPT, it would be vdev_geom_io_start. Can you let us know your pool layout with zdb, please?

---

Ah, OK... Haha... Just realized my colleague has apparently set up a test pool, and that one is indeed built from files. OK, you're absolutely right, it is file-based.
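For context, a file-backed test pool like the one discussed here is easy to end up with. A minimal sketch of how such a pool could be created (the pool name and file paths are taken from the zdb output in this report; the file size is an assumption, and the commands require root on a system with ZFS):

```shell
# Create two sparse backing files (size is an assumption, not from the report)
truncate -s 200m /testarea/newdisk1 /testarea/newdisk2

# Build a mirrored pool on top of the plain files; such vdevs go through
# vdev_file_io_start rather than vdev_geom_io_start, as noted in this report
zpool create testpool mirror /testarea/newdisk1 /testarea/newdisk2

# The vdevs show up with type 'file', matching the zdb -C output below
zpool status testpool
```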
```
# zdb -C testpool

MOS Configuration:
        version: 5000
        name: 'testpool'
        state: 0
        txg: 4473453
        pool_guid: 16429923306100190259
        hostid: 2280479956
        hostname: 'hostname.domain.tld'
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 16429923306100190259
            create_txg: 4
            children[0]:
                type: 'mirror'
                id: 0
                guid: 67677676813072578
                whole_disk: 0
                metaslab_array: 33
                metaslab_shift: 19
                ashift: 9
                asize: 204996608
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'file'
                    id: 0
                    guid: 13613914571249199622
                    path: '/testarea/newdisk1'
                    DTL: 44
                    create_txg: 4
                children[1]:
                    type: 'file'
                    id: 1
                    guid: 7225271787511829095
                    path: '/testarea/newdisk2'
                    DTL: 48
                    create_txg: 4
        features_for_read:
```

There are actually two pools here: tank and testpool. tank is built from a gpart device; testpool is built from two files in a ZFS file system within tank. A bit odd as a setup, but as the name reveals, it is just a test pool that was never properly cleaned up.

---

Thanks for confirming. Did you get a chance to test the patch, and does it fix the issue for you?

---

This was fixed by https://svnweb.freebsd.org/base?view=revision&revision=274619. The commit hook didn't trigger to note this here for some reason.

*** This bug has been marked as a duplicate of bug 195061 ***
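For reference, the workaround named in the summary disables ZFS TRIM before the pool is mounted. A sketch of how that tunable is typically set at boot on FreeBSD (the sysctl name comes from the summary; placing it in /boot/loader.conf is the usual way to apply it before ZFS mounts):

```
# /boot/loader.conf -- workaround from the summary: disable ZFS TRIM at boot
vfs.zfs.trim.enabled=0
```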