Bug 197513 - zpool status prints non-helpful block size warnings on CCISS volumes
Summary: zpool status prints non-helpful block size warnings on CCISS volumes
Status: New
Alias: None
Product: Base System
Classification: Unclassified
Component: bin
Version: 9.3-RELEASE
Hardware: amd64 Any
Importance: --- Affects Some People
Assignee: freebsd-fs mailing list
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2015-02-10 12:11 UTC by Gert Doering
Modified: 2015-11-17 21:59 UTC (History)
2 users

See Also:


Description Gert Doering 2015-02-10 12:11:50 UTC
Hiya,

on systems having their hard disks on an HP CCISS controller, zpool status in 9.3-RELEASE (and, I suspect, everything later as well) prints this warning:

nsc1-base-la$ zpool status
  pool: nsc1-base-la
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0 in 0h6m with 0 errors on Mon Feb  2 19:17:26 2015
config:

        NAME        STATE     READ WRITE CKSUM
        nsc1-base-la  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da0p3   ONLINE       0     0     0  block size: 512B configured, 1048576B native
            da1p3   ONLINE       0     0     0  block size: 512B configured, 1048576B native

I understand why you want to print "non-native block size" warnings for a 512B/4K mismatch, but for CCISS controllers, which seem to report a native block size of 1 MB, this is not useful information.

On the contrary, it clogs the output of the daily "periodic" mail (if daily_status_zfs_enable=YES is enabled, which I do to see if issues arise) with lengthy extra text, so much more has to be skimmed to tell whether something is really broken or this is just the usual programme.

dmesg on controller and disks, for reference:

ciss0: <HP Smart Array E200i> port 0x4000-0x40ff mem 0xfdf80000-0xfdffffff,0xfdf70000-0xfdf77fff irq 18 at device 8.0 on pci11
ciss0: PERFORMANT Transport
ciss0: got 2 MSI messages
...
da0 at ciss0 bus 0 scbus0 target 0 lun 0
da0: <COMPAQ RAID 0 OK> Fixed Direct Access SCSI-5 device
da0: Serial Number P675MU2201
da0: 135.168MB/s transfers
da0: Command Queueing enabled
da0: 69973MB (143305920 512 byte sectors: 255H 32S/T 17562C)
da0: quirks=0x1<NO_SYNC_CACHE>
da1 at ciss0 bus 0 scbus0 target 1 lun 0
da1: <COMPAQ RAID 0 OK> Fixed Direct Access SCSI-5 device
da1: Serial Number P675MU2201  
da1: 135.168MB/s transfers
da1: Command Queueing enabled  
da1: 69973MB (143305920 512 byte sectors: 255H 32S/T 17562C)
da1: quirks=0x1<NO_SYNC_CACHE>

(We are not using the RAID controller for actual RAID setups, just to present JBOD to FreeBSD, with ZFS providing the RAID. Using a different controller is not really an option on blade server hardware.)

While this touches the same area as bug 187905, it is actually a different issue, as cciss hides the true block size.