Using gmirrors as swap on our hosts in the cluster, I noticed that I can no longer get a crashdump out of a system if its swap is gmirror-based.

# gmirror status
        Name    Status  Components
mirror/swap0  COMPLETE  da0p2 (ACTIVE)
                        da1p2 (ACTIVE)
mirror/swap1  COMPLETE  da2p2 (ACTIVE)
                        da3p2 (ACTIVE)
mirror/swap2  COMPLETE  da5p2 (ACTIVE)
                        da6p2 (ACTIVE)

# gmirror list
Geom name: swap0
State: COMPLETE
Components: 2
Balance: load
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 1
ID: 2592075302
Type: AUTOMATIC
Providers:
1. Name: mirror/swap0
   Mediasize: 8589934080 (8.0G)
   Sectorsize: 512
   Mode: r1w1e0
Consumers:
1. Name: da0p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 1783011607
2. Name: da1p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r1w1e1
   State: ACTIVE
   Priority: 1
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 74779342
Geom name: swap1
State: COMPLETE
Components: 2
Balance: load
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 1
ID: 787141291
Type: AUTOMATIC
Providers:
1. Name: mirror/swap1
   Mediasize: 8589934080 (8.0G)
   Sectorsize: 512
   Mode: r1w1e0
Consumers:
1. Name: da2p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 3145384570
2. Name: da3p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r1w1e1
   State: ACTIVE
   Priority: 1
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 3680403470
Geom name: swap2
State: COMPLETE
Components: 2
Balance: load
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 1
ID: 3352633282
Type: AUTOMATIC
Providers:
1. Name: mirror/swap2
   Mediasize: 8589934080 (8.0G)
   Sectorsize: 512
   Mode: r1w1e0
Consumers:
1. Name: da5p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r1w1e1
   State: ACTIVE
   Priority: 1
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 3811710265
2. Name: da6p2
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 131072
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 982746877
12.0-ALPHA4 FreeBSD 12.0-ALPHA4 #0 r338426M
Did you follow the notes from the man page?

     Doing kernel dumps to gmirror providers is possible, but some conditions
     have to be met. First of all, a kernel dump will go only to one component
     and gmirror always chooses the component with the highest priority.
     Reading a dump from the mirror on boot will only work if the prefer
     balance algorithm is used (that way gmirror will read only from the
     component with the highest priority). If you use a different balance
     algorithm, you should add:

           gmirror configure -b prefer data

     to the /etc/rc.early script and:

           gmirror configure -b round-robin data

     to the /etc/rc.local script.

     The decision which component to choose for dumping is made when dumpon(8)
     is called. If on the next boot a component with a higher priority will be
     available, the prefer algorithm will choose to read from it and
     savecore(8) will find nothing. If on the next boot a component with the
     highest priority will be synchronized, the prefer balance algorithm will
     read from the next one, thus will find nothing there.
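For this particular host, the man page's workaround would look something like the sketch below. This is only an illustration, not a tested fix: it substitutes this machine's mirror names (swap0/swap1/swap2) for the man page's example mirror "data", and it assumes the cluster's mirrors normally run the default "load" balance algorithm, so it restores "load" rather than the man page's "round-robin".

```shell
#! /bin/sh
# Sketch only (untested, assumes mirrors swap0..swap2 and default "load" balance).
#
# /etc/rc.early fragment: switch swap mirrors to "prefer" before savecore(8)
# runs, so reads come from the single component the dump was written to.
for m in swap0 swap1 swap2; do
        gmirror configure -b prefer "$m"
done

# /etc/rc.local fragment: once savecore has run, restore the normal
# balance algorithm for regular swap I/O.
for m in swap0 swap1 swap2; do
        gmirror configure -b load "$m"
done
```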
(In reply to Mark Johnston from comment #2) Since these gmirrors were created with the default balance algorithm (load), I'm guessing I should add a change to our clusteradm scripts. Should the default actually be "load" and not "data"? I'm ignorant of the meaning and impact of this type of change.
(In reply to Sean Bruno from comment #3) Hmm, "data" isn't a load-balancing algorithm, it's just an example mirror name. I don't think there's any need to change the default algorithm ("load"); the man page just suggests temporarily changing the algorithm to "prefer" while savecore runs.
I think it'd be reasonable to mirror dump contents instead of requiring the user to hack around savecore. Or perhaps just set a flag in the superblock. Or savecore needs automation to handle gmirror dumps in particular. My point is: there is still a bug here in that it is not automated.
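To make the "savecore needs automation" idea concrete, the manual workaround could in principle be wrapped so administrators never touch balance algorithms by hand. The sketch below is purely hypothetical (none of these hooks exist in FreeBSD today; the mirror names, the saved-algorithm handling via `gmirror list`, and the `savecore` invocation are all illustrative assumptions):

```shell
#! /bin/sh
# Hypothetical wrapper sketch (not existing FreeBSD functionality):
# flip each swap mirror to "prefer" so savecore(8) reads the component
# the kernel dumped to, then restore whatever algorithm was in use.
for m in swap0 swap1 swap2; do
        # Record the current balance algorithm (parsed from gmirror list).
        prev=$(gmirror list "$m" | awk '/^Balance:/ { print $2 }')
        gmirror configure -b prefer "$m"
        savecore /var/crash "/dev/mirror/$m"
        gmirror configure -b "$prev" "$m"
done
```

Something along these lines could live in an rc.d script ordered before swap is enabled, but the cleaner fixes suggested above (mirroring the dump contents, or a superblock flag) would avoid the algorithm juggling entirely.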
(So I would like to reopen this, if that's ok with you (Sean) and Mark.)
(In reply to Conrad Meyer from comment #6) Oh for sure.