Bug 223243

Summary: daily 404.status-zfs needs to report status using '-v' flag
Product:   Base System
Component: misc
Version:   11.0-RELEASE
Hardware:  Any
OS:        Any
Status:    New
Severity:  Affects Many People
Priority:  ---
Keywords:  easy, patch
Reporter:  Bob Frazier <bobf>
Assignee:  freebsd-bugs (Nobody) <bugs>

Description Bob Frazier 2017-10-25 18:10:59 UTC
The 404.status-zfs script in /etc/periodic/daily needs to use the '-v' flag when reporting status via 'zpool status'.  That way, if there are errors affecting specific files, those files are reported along with the I/O and checksum error counts.

Sometimes, after a zpool scrub, a file that previously had errors disappears from the list.  In my experience, such a "fixed" file may still contain corrupt data [I have seen one such example]: the scrub "fixes" the problem as far as the pool is concerned, but the damage in the file remains.

Having a record of these potentially corrupt files in /var/mail/root [or wherever the periodic output ends up] would leave a kind of audit trail and help diagnose problems that have crept in unnoticed, as long as the daily ZFS status report includes the '-v' flag.
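
For context, the daily ZFS status check only runs if it has been enabled in /etc/periodic.conf (as I understand it, the stock default is "NO"):

# /etc/periodic.conf
daily_status_zfs_enable="YES"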

Otherwise the only information you have is "an error happened", which is less than helpful.
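
To see the difference by hand on a system with ZFS (these are just the two stock commands, not part of the patch):

zpool status -x    # terse health summary, e.g. "all pools are healthy"
zpool status -v    # full status, plus the list of files with permanent errors, if any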

I made a simple fix in my own version of the script:

@@ -24,17 +24,19 @@
 		;;
 	    *)
 		;;
 	esac
 	sout=`zpool status -x`
-	echo "$sout"
+	#echo "$sout"
 	# zpool status -x always exits with 0, so we have to interpret its
 	# output to see what's going on.
 	if [ "$sout" = "all pools are healthy" \
 	    -o "$sout" = "no pools available" ]; then
+		echo "$sout"
 		rc=0
 	else
+		zpool status -v
 		rc=1
 	fi
 	;;
 
     *)

It's not perfect, but it runs 'zpool status -v' whenever errors are detected.  A better solution may exist, but this one appears to work.  [If it doesn't, you might fix it so it does.]
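
For reference, here is the same idea as a self-contained sh sketch rather than a diff.  It is only an illustration, not the submitted patch; the healthy-pool strings simply mirror the ones the stock script already matches:

#!/bin/sh
# Print the terse summary when pools are healthy; print the verbose
# status (including per-file permanent errors) when they are not.
sout=`zpool status -x`
case "$sout" in
"all pools are healthy"|"no pools available")
	echo "$sout"
	rc=0
	;;
*)
	zpool status -v
	rc=1
	;;
esac
exit $rc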