One of our drives is starting to show errors in its SMART self-tests. Output of zpool status:

  pool: zmnt
 state: ONLINE
  scan: resilvered 1.03T in 14h11m with 0 errors on Mon Dec 11 22:23:44 2017
config:

        NAME          STATE     READ WRITE CKSUM
        zmnt          ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            gpt/zfs0  ONLINE       0     0     0
            gpt/zfs1  ONLINE       0     0     0
            gpt/zfs2  ONLINE       0     0     0
            gpt/zfs3  ONLINE       0     0     0
            gpt/zfs4  ONLINE       0     0     0
            gpt/zfs5  ONLINE       0     0     0
        logs
          mirror-1    ONLINE       0     0     0
            gpt/zil0  ONLINE       0     0     0
            gpt/zil1  ONLINE       0     0     0
        spares
          gpt/zfs6    AVAIL
          gpt/zfs7    AVAIL

errors: No known data errors

I tried to replace gpt/zfs4 with the spare gpt/zfs6:

        zpool replace zmnt /dev/gpt/zfs4 /dev/gpt/zfs6

After the resilver finished, the pool was in the same state as before...
Perhaps because gpt/zfs6 is already marked as a spare (part of the pool) and ZFS has not recorded any errors on it yet? You could zpool remove the spare and then re-issue the replace.
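The suggested workaround would look something like this (a sketch using the device names from the pool above; run against a live pool only after double-checking which disk is failing, and verify the state with zpool status afterwards):

```shell
# Take gpt/zfs6 out of the spares list so it becomes a plain,
# unassigned vdev again (removing an AVAIL spare is non-destructive).
zpool remove zmnt gpt/zfs6

# Now replace the failing disk with it as a permanent pool member.
# Once the resilver completes, gpt/zfs4 is detached automatically.
zpool replace zmnt gpt/zfs4 gpt/zfs6

# Watch resilver progress and confirm the final layout.
zpool status zmnt
```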
I already did what you suggested and the system did what I wanted to achieve. It's still surprising to watch it resilver for 14 hours and then end up in the same state as before... I think when you request a replace (even when replacing with a hot spare), it should just detach the replaced drive when finished. So maybe that's not a bug report but a feature request.
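For reference, when a disk is replaced by a hot spare without removing it from the spares list first, the spare stays INUSE under a spare-N vdev after the resilver, and finishing the swap takes a manual detach. A sketch of that path, again using this pool's device names:

```shell
# After "zpool replace zmnt gpt/zfs4 gpt/zfs6" with gpt/zfs6 still a
# configured spare, status shows gpt/zfs4 and gpt/zfs6 grouped under
# a "spare-0" entry once the resilver completes.

# Detach the original disk; the spare is promoted to a permanent
# member of raidz2-0 (and stops being listed as a spare).
zpool detach zmnt gpt/zfs4

# Alternatively, detaching the spare instead would return it to the
# spares list and keep gpt/zfs4 in place.
zpool status zmnt
```

The feature request above amounts to having that detach of the replaced drive happen automatically when the resilver onto a spare finishes.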