Running bsdinstall on an existing system can leave that system broken and unbootable without manual console intervention. Line 804 of /usr/src/usr.sbin/bsdinstall/scripts/zfsboot forces an export of all existing zpools. There is no warning of this; the only warning given is that the disk/partition being set up will be destroyed. The apparent assumption is that bsdinstall is running on new hardware where the install medium is the only existing filesystem, so running bsdinstall on a working system to prepare a new system disk risks disaster.

If the existing system has only a root zpool holding all FreeBSD components, disaster is averted because the export fails. But if it has a root zpool plus a separate zpool for /usr, /var, and so on, the system fails immediately: the session drops and no new login is possible. Worse, power cycling will not bring the system back up, because the non-root zpool has been exported. If console access is available, the operator can boot to single-user mode and re-import the needed zpool. Otherwise, the system is out of service indefinitely.

The underlying problem is that it is all but impossible to tell whether a zpool lives on the install target, because the target may appear in zpool status as a plain device such as da3p2, as diskid/..., as gpt/..., or in some other form. I cannot predict how FreeBSD will choose to describe a given disk or partition, and once the system settles on a diskid/... name the da3p2 form no longer appears. I have another system with a mirrored zpool in which one of two identical disks is listed in the diskid form and the other in the da3p2 form.

At a minimum, bsdinstall should not run zpool export -F; if it exports at all, it should use a plain export. Better still, it could display the zpool list to the user and let the user decide whether a pool lives on the disk about to be destroyed.
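
Below is a rough, illustrative sketch of the kind of check I am suggesting; it is not the actual zfsboot code and not a proposed patch. It assumes a hypothetical $TARGET variable naming the disk the installer is about to use (e.g. da3), and uses only glabel status, zpool list, and zpool status to see whether any imported pool references the target, either by raw device name or via a diskid/gpt/gptid label. The name matching is deliberately loose; a real implementation would have to be stricter.

    #!/bin/sh
    # Illustrative sketch only, not a patch to zfsboot. TARGET is a
    # hypothetical variable naming the disk the installer is about to use.
    TARGET=${TARGET:-da3}

    # Gather the names the target may appear under in "zpool status":
    # the raw device itself plus any diskid/, gpt/ or gptid/ labels that
    # glabel(8) reports on top of it or its partitions.
    aliases="$TARGET $(glabel status | awk -v dev="$TARGET" '$3 ~ "^"dev { print $1 }')"

    # Refuse to touch any imported pool that references one of those names.
    for pool in $(zpool list -H -o name); do
        for name in $aliases; do
            if zpool status "$pool" | grep -q "$name"; then
                echo "WARNING: pool '$pool' appears to be on $TARGET."
                echo "Not exporting it automatically; operator confirmation needed."
                exit 1
            fi
        done
    done

    # Only if no imported pool references the target would a plain,
    # non-forced "zpool export" of a pool selected for reuse be reasonable.

Even without a check like this, simply dropping -F so the export is not forced, or printing the zpool list and asking for confirmation, would avoid the unbootable-system scenario described above.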