Bug 194976 - [10.1-RELEASE][panic] ZFS TRIM. Kernel dies within seconds after mounting ZFS. vfs.zfs.trim.enabled=0 fixes it
Summary: [10.1-RELEASE][panic] ZFS TRIM. Kernel dies within seconds after mounting ZFS...
Status: Closed DUPLICATE of bug 195061
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 10.1-RELEASE
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: Steven Hartland
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-11-12 23:16 UTC by Palle Girgensohn
Modified: 2015-01-28 21:29 UTC
2 users

See Also:
bugmeister: mfc-stable10?
bugmeister: mfc-stable9?
bugmeister: mfc-stable8?


Attachments

Description Palle Girgensohn freebsd_committer freebsd_triage 2014-11-12 23:16:44 UTC
Hi,

booted in single user mode.
/etc/rc.d/zfs start

two seconds... then... boom:
KDB: stack backtrace:                                                           
#0 0xffffffff8096eec0 at kdb_backtrace+0x60                                     
#1 0xffffffff80933b95 at panic+0x155                                            
#2 0xffffffff80d66c1f at trap_fatal+0x38f                                       
#3 0xffffffff80d66f38 at trap_pfault+0x308                                      
#4 0xffffffff80d6659a at trap+0x47a                                             
#5 0xffffffff80d4c482 at calltrap+0x8                                           
#6 0xffffffff819e8a9c at dmu_write_uio_dnode+0xcc                               
#7 0xffffffff819e89ab at dmu_write_uio_dbuf+0x3b                                
#8 0xffffffff81a7a5b2 at zfs_freebsd_write+0x5e2                                
#9 0xffffffff80e85c35 at VOP_WRITE_APV+0x145                                    
#10 0xffffffff809e37e9 at vn_rdwr+0x299                                         
#11 0xffffffff81a36e55 at vdev_file_io_start+0x165                              
#12 0xffffffff81a54676 at zio_vdev_io_start+0x326                               
#13 0xffffffff81a51382 at zio_execute+0x162                                     
#14 0xffffffff81a84e9e at trim_map_commit+0x2ae                                 
#15 0xffffffff81a84ce7 at trim_map_commit+0xf7                                  
#16 0xffffffff81a84ce7 at trim_map_commit+0xf7                                  
#17 0xffffffff81a84a32 at trim_thread+0xf2                                      
Uptime: 1m28s                                 


I added vfs.zfs.trim.enabled=0 to /boot/loader.conf and it boots nicely.
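For anyone hitting the same panic, the workaround is a single loader tunable. Shown here as it would appear in the file (a config fragment only; the quoted value is the conventional loader.conf syntax):

```
# /boot/loader.conf
vfs.zfs.trim.enabled="0"    # disable ZFS TRIM as a workaround until the fix lands
```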

It is an HP DL360 with ciss0: <HP Smart Array P410i>. Just a single volume presented to the OS, so it is not JBOD, and there are no SSD disks involved.
Comment 1 Steven Hartland freebsd_committer freebsd_triage 2014-11-16 16:10:05 UTC
This looks like it is the same issue as noted in:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=195061

I didn't notice at the time that you're mounting a file-backed volume, not a geom-backed one.
Comment 2 Palle Girgensohn freebsd_committer freebsd_triage 2014-11-16 18:10:31 UTC
File backed volume? No, it uses gpt partitions?

# zpool history
History for 'tank':
2013-02-25.14:56:06 zpool create -f -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache tank /dev/gpt/disk0.nop
2013-02-25.14:56:22 zpool export tank
2013-02-25.14:58:23 zpool import -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache tank
2013-02-25.15:00:15 zpool set bootfs=tank tank
...

It is a zfs-on-root setup, probably setup according to the guidelines of https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE
Comment 3 Steven Hartland freebsd_committer freebsd_triage 2014-11-16 20:24:32 UTC
(In reply to Palle Girgensohn from comment #2)
> File backed volume? No, it uses gpt partitions?
> 
> # zpool history
> History for 'tank':
> 2013-02-25.14:56:06 zpool create -f -o altroot=/mnt -o
> cachefile=/var/tmp/zpool.cache tank /dev/gpt/disk0.nop
> 2013-02-25.14:56:22 zpool export tank
> 2013-02-25.14:58:23 zpool import -o altroot=/mnt -o
> cachefile=/var/tmp/zpool.cache tank
> 2013-02-25.15:00:15 zpool set bootfs=tank tank
> ...
> 
> It is a zfs-on-root setup, probably setup according to the guidelines of
> https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE

Your trace disagrees: it mentions vdev_file_io_start, which is only called for a volume created from a file. If it was gpt, that would be vdev_geom_io_start.

Can you let us know your pool layout with zdb, please?
Comment 4 Palle Girgensohn freebsd_committer freebsd_triage 2014-11-16 20:34:26 UTC
Ah, OK... Haha... I just realized my colleague has apparently set up a test pool, and that one is indeed built from files. OK, you're absolutely right, it is file based.

# zdb -C testpool

MOS Configuration:
        version: 5000
        name: 'testpool'
        state: 0
        txg: 4473453
        pool_guid: 16429923306100190259
        hostid: 2280479956
        hostname: 'hostname.domain.tld'
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 16429923306100190259
            create_txg: 4
            children[0]:
                type: 'mirror'
                id: 0
                guid: 67677676813072578
                whole_disk: 0
                metaslab_array: 33
                metaslab_shift: 19
                ashift: 9
                asize: 204996608
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'file'
                    id: 0
                    guid: 13613914571249199622
                    path: '/testarea/newdisk1'
                    DTL: 44
                    create_txg: 4
                children[1]:
                    type: 'file'
                    id: 1
                    guid: 7225271787511829095
                    path: '/testarea/newdisk2'
                    DTL: 48
                    create_txg: 4
        features_for_read:
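As an aside, the vdev backing types can be pulled straight out of a zdb configuration dump like the one above; a minimal sketch (the regex and the trimmed sample text are illustrative, not part of zdb itself):

```python
# Hedged sketch: extract the vdev "type:" entries from `zdb -C` output to spot
# file-backed vdevs, which are the ones serviced by vdev_file_io_start().
import re

# Trimmed sample modeled on the zdb output above (illustrative only).
zdb_output = """\
        vdev_tree:
            type: 'root'
            children[0]:
                type: 'mirror'
                children[0]:
                    type: 'file'
                    path: '/testarea/newdisk1'
                children[1]:
                    type: 'file'
                    path: '/testarea/newdisk2'
"""

def vdev_types(text):
    """Return every vdev type mentioned in a zdb configuration dump."""
    return re.findall(r"type: '(\w+)'", text)

print(vdev_types(zdb_output))  # ['root', 'mirror', 'file', 'file']
```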
Comment 5 Palle Girgensohn freebsd_committer freebsd_triage 2014-11-16 23:46:30 UTC
There are actually two pools here.

tank and testpool

tank is from a gpart device

testpool is built from two files in a ZFS file system within tank. A bit odd as a setup, but as the name reveals, it is just a test pool that was never properly cleaned up.
Comment 6 Steven Hartland freebsd_committer freebsd_triage 2014-11-17 09:05:09 UTC
Thanks for confirming. Did you get a chance to test the patch, and does it fix the issue for you?
Comment 7 Steven Hartland freebsd_committer freebsd_triage 2014-11-17 11:37:48 UTC
This was fixed by https://svnweb.freebsd.org/base?view=revision&revision=274619

The commit hook didn't trigger to note this here, for some reason.
Comment 8 Xin LI freebsd_committer freebsd_triage 2015-01-28 21:29:19 UTC

*** This bug has been marked as a duplicate of bug 195061 ***