Bug 217195 - bsdinstall: ZFS wrongly grabs all HDDs in the system and fails when more than 5 HDDs are in the system
Status: New
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: CURRENT
Hardware: Any Any
Importance: --- Affects Many People
Assignee: freebsd-sysinstall (Nobody)
URL:
Keywords: install
Depends on:
Blocks:
 
Reported: 2017-02-18 12:38 UTC by mikhail.rokhin
Modified: 2022-10-13 17:47 UTC
CC List: 1 user

See Also:


Description mikhail.rokhin 2017-02-18 12:38:36 UTC
1. In VirtualBox, create a machine with only 5 VHDs
2. Boot from the install ISO of either CURRENT or 11-STABLE
3. Choose AutoZFS
4. Choose RAIDZ1 with only 3 disks (don't choose all 5)
5. Follow the remaining steps, reboot
6. Log in to the fresh system and say "Wow!": you've got RAIDZ3 instead of the chosen RAIDZ1, with 5 disks attached instead of the 3 you chose!
7. Shut down the VM and add another 7 VHDs to the system
8. Boot the system and watch it fail

Summary: AutoZFS with RAIDZ{1,2,3} fails when you choose more than 3 disks during the install, and also fails when the system has more than 5 disks at install time.

So ZFS automatically grabs all the disks in the system by default, which is very wrong behavior; actually, a failure.

To see the boot failure, just add all 11 disks to RAIDZ3 AutoZFS during the install and boot the system after the install is complete.
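
For anyone checking for stale pools before installing: you can drop to a shell from the installer; a minimal sketch, assuming device names like ada0p3 (it only reads, nothing is imported or modified):

   # list all pools visible for import, including leftovers from earlier installs
   zpool import

   # dump the raw ZFS labels of a suspect provider (disk or partition)
   zdb -l /dev/ada0p3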
Comment 1 mikhail.rokhin 2017-02-18 13:20:11 UTC
But when you choose RAIDZ1 and 5 disks at install time, it leads to RAIDZ1 with 5 disks; it does not change to RAIDZ3 automatically, as happens in the case of 3 disks chosen at install.
Comment 2 mikhail.rokhin 2017-02-18 15:21:58 UTC
There is another problem as well: while ZFS wrongly grabs all the VHDs, it fails in this scheme:
- there was a RAIDZ1 with 5 VHD disks
- reinstalling onto the same system, but choosing RAID10 with just 4 disks, leads to "error 2" while mounting the root ZFS on boot, because of uncleaned remnants of the previous system on the 5th VHD, which was not manually chosen...

It turns out that the user actually can't choose the number of disks.
Comment 3 Allan Jude freebsd_committer freebsd_triage 2017-02-19 02:47:51 UTC
I am unable to reproduce this:

1. Create a new vbox machine with 5 disks
2. Boot the 12.0 snapshot 20170210 install iso
3. Choose autozfs
4. raid-z1 with 3 of 5 disks selected
5. Install, reboot; the zpool is a z1 with only 3 disks, and the other 2 disks are not partitioned or used.

When you have selected your disks and proceed to start the install, there is the 'last chance to cancel' dialogue, which lists the disks that will be reformatted. Does it list 3 or 5 for you?

If I am not mistaken, the emulated BIOS in VirtualBox only supports booting from 8 disks.
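
For reference, a quick way to verify what the installer actually used after the first boot (device names are examples):

   zpool status zroot   # should show a raidz1 vdev with just the 3 selected disks
   gpart show           # the unselected disks should show no partition table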
Comment 4 mikhail.rokhin 2017-02-19 13:14:34 UTC
(In reply to Allan Jude from comment #3)

Right you are! I had forgotten that I used all 11 VHDs in previous tests with RAIDZ3, so it turns out the disks were still uncleaned, because only the disks you choose (3 in this case) get erased.

There must be an option, either in bsdinstall or in ZFS, to use strictly the chosen disks, not all of those available in the system.
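
In the meantime, the leftovers can be cleaned by hand from the installer's shell before reinstalling; a sketch, assuming an old pool partition on ada3p3 (adjust the device names and repeat per dirty disk):

   zpool labelclear -f /dev/ada3p3   # wipe the ZFS labels from the old pool partition
   gpart destroy -F ada3             # then drop the stale partition table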
Comment 5 Allan Jude freebsd_committer freebsd_triage 2017-02-19 16:36:22 UTC
(In reply to mikhail.rokhin from comment #4)
I am guessing the complication was caused by the fact that you have the same pool name.

So you had:
3 disks as a RAID-Z1 called zroot
8 of 11 old disks as a RAID-Z3 called zroot

And you booted and it selected the wrong zpool.

Anyway, if you feel there is actually an issue here, can you describe it in more detail? I am unclear on how the installer is doing anything wrong.

The installer never touches disks you don't specifically tell it to write to.
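
For what it's worth, even when two pools share a name, a specific one can be imported by its numeric id; a sketch (the id below is a placeholder, the real one is printed by the first command):

   zpool import                             # lists both pools named zroot with their unique ids
   zpool import 1234567890123456 oldzroot   # import one of them by id under a new name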
Comment 6 mikhail.rokhin 2017-02-20 07:08:45 UTC
(In reply to Allan Jude from comment #5)

The problem is that ZFS grabs all the disks instead of only the chosen ones. I suppose some option might need to be set (or unset) for ZFS during the install: whatever leftovers are on all 11 disks, use only the chosen 3, never touch the other 8, and stay within the given 3 disks.

Does ZFS automatically search all disks for volumes? If so, that should be switched off.
Comment 7 Allan Jude freebsd_committer freebsd_triage 2017-02-20 07:31:16 UTC
(In reply to mikhail.rokhin from comment #6)
ZFS doesn't automatically do anything.

I think you are confusing the results.

Do an install and select only 3 disks.

During the confirmation dialog that looks like this:

https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/bsdinstall/bsdinstall-zfs-warning.png

You should see only 3 disks listed. If you see more, then you selected more.


Try using a different zpool name, to make sure you are not getting mixed results from some leftover old zpool that might exist on these 11 drives.
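
If you script the install instead, the pool name and the exact disks can be pinned up front. A minimal sketch of the relevant installerconfig preamble for bsdinstall(8), assuming the ZFSBOOT_* variables of its zfsboot script (values here are examples):

   DISTRIBUTIONS="kernel.txz base.txz"
   ZFSBOOT_DISKS="ada0 ada1 ada2"   # only these disks are touched
   ZFSBOOT_VDEV_TYPE="raidz1"
   ZFSBOOT_POOL_NAME="ztest"        # anything but the leftover "zroot"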
Comment 8 mikhail.rokhin 2017-02-20 07:52:48 UTC
(In reply to Allan Jude from comment #7)
That is the problem: either all the disks in the system must be properly cleaned, or there must be a way for ZFS to use strictly the chosen disks, whatever leftovers are on the rest of them.

As it stands, it looks and works illogically: I choose 3 specific disks, but ZFS on its own searches all the disks for the pool.

What if I need two pools of the same name across the 11 disks? Should we notify the ZFS developers about this failure?

Checked RAIDZ3 with 7 clean disks: it works,

but with 11 clean disks it fails: ZFS i/o error - all block copies unavailable; can't find /boot/zfsloader


RAIDZ2 with 6 clean disks: works fine,

but with 10 clean disks: ZFS i/o error - all block copies unavailable
Comment 9 mikhail.rokhin 2017-02-20 09:39:31 UTC
(In reply to Allan Jude from comment #7)

All disks in these tests are 2 TB VirtualBox disks.



Checked RAIDZ1 with 5 clean disks: it works,

but with 9 clean disks it fails: ZFS i/o error - all block copies unavailable; can't find /boot/zfsloader


RAID10 with 6-12 clean disks:
 - fails: ZFS i/o error - all block copies unavailable
 - works only with 4 or fewer disks


Mirror with 12 or fewer clean disks: works fine


Stripe with 6-12 clean disks:
 - fails: ZFS i/o error - all block copies unavailable
 - with 5 disks it boots but fails to find the kernel
 - works only with 4 or fewer disks



Could you make it possible to include any number of disks of any capacity in an array?

Or set the threshold at around 128 disks or more?


NB: The way ZFS behaves with uncleaned disks is actually a vulnerability and an exploit vector: one can add a dirty disk and it will ruin the system after a reboot...
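
On that note, a sketch of neutralizing a dirty disk before handing it to the installer: ZFS keeps two label copies at the start and two at the end of a provider, so both ends need wiping (ada5 is an example name; this destroys all data on the disk):

   size=$(diskinfo ada5 | awk '{print $3}')                          # media size in bytes
   dd if=/dev/zero of=/dev/ada5 bs=1m count=4                        # first 4 MB: GPT plus front labels
   dd if=/dev/zero of=/dev/ada5 bs=1m seek=$((size / 1048576 - 4))   # last 4 MB: backup GPT plus end labels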