The FreeBSD 10.4 release image is something like 22GB (with plenty of free space) and fits in the GCE free tier (30GB disk), meaning I can have a FreeBSD machine running 24/7 for free. The FreeBSD 11.2 image size is now > 32GB, so I cannot install it directly... Can we make it fit under the 30GB threshold?
Yes, easily. The problem is that the default VMSIZE (the size of just the UFS root partition) in release/Makefile.vm was bumped to 30G in r330033, and the extra swap and boot partitions add a little more on top of that. It could be dropped to 27G or so to end up with ~30G images.
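For context, a rough back-of-the-envelope of why the default overflows a 30GB disk, using the partition layout shown later in this thread (the exact boot-partition size varies slightly; the point is the total, not the last few KB):

  freebsd-boot    ~16K
  freebsd-swap     1.0G
  freebsd-ufs       30G   <- this is what VMSIZE controls
  ----------------------
  raw image:  a bit over 31G, so it no longer fits on a 30GB GCE free-tier disk

Dropping VMSIZE to ~27G keeps the total comfortably below 30GB.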
Is this initial size relevant anyway, since growfs runs on first boot? The root UFS partition is the last one, so its size shouldn't matter; it all depends on the actual disk the image is burnt onto, if I understand correctly. I'm in favor of reducing the size to 27GB nevertheless, if it still serves the purpose intended by the 'recent' 20GB->30GB increase in release/Makefile.vm.
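For reference, a minimal sketch of the first-boot growth mechanism this relies on (assuming the image ships the stock growfs rc.d script; nothing here is GCE-specific):

  # set in the image's /etc/rc.conf
  growfs_enable="YES"

On first boot, /etc/rc.d/growfs expands the last partition (the UFS root) and the filesystem on it to fill whatever disk the image was written to; it can also be run by hand on a live system with `service growfs onestart'.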
(In reply to Sylvain Garrigues from comment #2) +100
The size was increased so as to have enough space to download the sources, build and install a new kernel (while saving the old kernel), and to buildworld and installworld. The previous size did not allow this, and growfs does not work inside a virtual machine spun up in bhyve. Thus some effort is required to use the weekly snapshots for development if they are built at a smaller size. Reducing the size to 27Gb is probably still (barely) enough to build a complete system, but it will be really tight and will almost certainly have to grow again before too long.
(In reply to Kirk McKusick from comment #4) Well I am using the GCE machines for the exact same thing: I am building arm64 kernels and worlds, I am packaging with them (`make packages`) and I am also making snapshots of them. I have a few months history, keeping old worlds and images. And yet, on the 30GB GCE machine, I have more than 10GB free space left: [root@dev ~]# df -h Filesystem Size Used Avail Capacity Mounted on /dev/gpt/rootfs 28G 15G 11G 59% / devfs 1.0K 1.0K 0B 100% /dev [root@dev ~]# du -hs /usr/src/ /usr/obj 3.2G /usr/src/ 4.7G /usr/obj [root@dev ~]# gpart show da0 => 3 62914549 da0 GPT (30G) 3 32 1 freebsd-boot (16K) 35 2097152 2 freebsd-swap (1.0G) 2097187 60817365 3 freebsd-ufs (29G) Granted, there are some big parts (like ZFS and bhyve) that I am not building: [root@dev ~]# cat /etc/src.conf MALLOC_PRODUCTION=YES WITH_CCACHE_BUILD=YES WITHOUT_TESTS=YES WITHOUT_BHYVE=YES WITHOUT_PROFILE=YES WITHOUT_ZFS=YES WITHOUT_DEBUG_FILES=YES I am not sure if it would fill up the 11GB of free space (I do take your word on it) but yet your use case (build world and kernel in a bhyve machine) doesn't seem compatible with mine (having Google Cloud machines running FreeBSD for free, like it used to be before gcj committed the change). I see three solutions: 1/ keep the root partition to a more reasonable size and make growfs and bhyve work together (the cleanest solution) 2/ reduce temporarily the size by a few GB (maybe losing 2GB is enough, my making VMSIZE=28GB) so that anybody can try and install the GCE images on Google Cloud. 3/ have special treatment for cloud images... most other Unix images are around 10GB, why is FreeBSD 32GB, like a Microsoft Windows image? From https://console.cloud.google.com/compute/images: debian-9-tf-nightly-v20181017 10 Go Debian coreos-stable-1855-4-0-v20180911 9 Go CoreOS centos-7-v20181011 10 Go CentOS rhel-7-v20181011 10 Go RedHat sles-15-v20180816 10 Go SUSE Linux Enterprise ubuntu-1804-bionic-v20181003 10 Go Canonical windows-server-1803-dc-core-v20181009 32 Go Microsoft FreeBSD ? 32GB. Glen, what do you think?
I must add that, to be able to use FreeBSD 11 (or CURRENT) on Google Cloud with the default 30GB disk, I had to first install the old FreeBSD 10.4 image, which is hopefully still available (until when?), and then freebsd-update to 11.2. Installing the 11.2 release directly with the instructions given in the handbook / FreeBSD site fails with an error saying the image is too big. Just saying this because I think it is quite a common use case, and I find it annoying that FreeBSD cannot be installed with the default machine settings. Plus, having the project upload these big images to the cloud weekly seems overkill, when a 10GB one would be enough (followed by growfs).
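For anyone following the same workaround, the upgrade from the 10.4 image is the standard freebsd-update binary-upgrade sequence (target release shown purely as an example):

  # freebsd-update -r 11.2-RELEASE upgrade
  # freebsd-update install
  # shutdown -r now
  ...and after the reboot:
  # freebsd-update install    (repeat until it reports nothing left to install)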
(In reply to Sylvain Garrigues from comment #5) I have looked more closely, and I do not need the full 30GB. Indeed I would be fine with 27Gb or even 25Gb. My original request to Glen was to increase it to 22Gb, so the increase to 30Gb was just to avoid being nibbled to death by 2Gb increases. So keeping it at a size that allows it to be run for free will not impact building systems for at least several years. Unless we fall into the Linux success-disaster trap of adding a million lines of code a year to the release...
(In reply to Kirk McKusick from comment #7) Thanks, then I guess Glen could reduce the size a little with a new commit. But still, I am wondering why we keep VM image sizes this big. Not everybody needs to download the FreeBSD sources and recompile the system. I believe we could stick to a reasonable size (maybe even 10GB), and then anybody who finds it too small can use growfs, whether on a virtual disk in the cloud (like me) or on a local disk, after enlarging the image prior to installing with something like:

# truncate -s 100G <new_vm_img>
# dd if=<old_vm_img> of=<new_vm_img> conv=notrunc

What do you think?
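As a side note, for the plain .raw image the dd step is probably unnecessary: since the UFS root is the last partition, truncating the existing file in place should be enough (a sketch only, with an illustrative file name; other image formats need their hypervisor's own resize tool instead):

  # truncate -s 100G FreeBSD-12.0-RELEASE-amd64.raw

The extra space then becomes usable once growfs runs inside the guest, either automatically at first boot or by hand.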
I just hope this will be "solved" before 12.0-RELEASE is out, so that anybody can use the new release on Google Cloud.
(In reply to Sylvain Garrigues from comment #9)
> I just hope this will be "solved" before 12.0-RELEASE is out, so that
> anybody can use the new release on Google Cloud.

To be honest, the response in comment #5 is why the images were bumped to 30GB. As the base system grows, these images will also need to grow, as is the case for the amd64 disc1.iso, for example, and as was the case for the arm SD card images. Having said that, I personally do not see a "bug" here, necessarily, as a 30GB image is eventually going to be increased even further at some point.
(In reply to Glen Barber from comment #10) I agree it is not a bug per se, but rather a comment on the commit which increased the size of all VM images by 10GB (a 50% increase) at Kirk's request. He noticed that free space was getting low on a 20GB disk for people who want to download the full source history, recompile both world and kernel, keep some old kernels, and cannot or don't want to mess with growfs (that's a lot of conditions). Kirk admits he doesn't need the full 30GB (anything >25GB seems fine), and I believe 27GB, for instance, is plenty while still allowing the default disk settings of 30GB (which is what you get when you click 'Launch this [FreeBSD] image') - and in addition, anything under 30GB is free if you use a low-resource Google Cloud machine. So my comment/request on your commit, if you wish to honor it, is: can you solve Kirk's problem with VMSIZE=27GB instead of the 30GB you chose? Or, even better, can you introduce something like VMSIZE-GCE or VMSIZE-CLOUDWARE = 10GB so that the cloud images we upload to Amazon, Google and Azure are smaller and adapted to the size of the disk they are installed onto, since growfs *does* do its job there?
Just realized that, for EC2 images, the UFS partition size is overridden in release/tools/ec2.conf per Colin Percival's commit. He wrote:

# Build with a 3 GB UFS partition; the growfs rc.d script will expand
# the partition to fill the root disk after the EC2 instance is launched.
# Note that if this is set to <N>G, we will end up with an <N+1> GB disk
# image since VMSIZE is the size of the UFS partition, not the disk which
# it resides within.
export VMSIZE=3072M

(see https://reviews.freebsd.org/source/src/browse/head/release/tools/ec2.conf$14)

Since GCE images seem to require src and ports extracted per Google's Marketplace (Amazon does not require them), I see no reason not to add VMSIZE=10GB or 15GB in release/tools/gce.conf. It would enable people to use FreeBSD within the Always Free program of Google Cloud (which gives 30GB of storage for free) - this is not possible right now because of the ~32GB FreeBSD 11.2 image size.
(In reply to Glen Barber from comment #10) If there is an easy way to create 10Gb images in addition to the 30Gb images for the folks that want to use 10Gb images on the cloud, then that would be my first choice. Otherwise, I would be in favor of having you reduce the 30Gb image size to 27Gb as that should still give us several years of growth. And hopefully by the time growth is needed, the size of the "free" VMs will also have increased.
(In reply to Kirk McKusick from comment #13)
> (In reply to Glen Barber from comment #10)
> If there is an easy way to create 10Gb images in addition to the 30Gb images
> for the folks that want to use 10Gb images on the cloud, then that would be
> my first choice.

Because of how the upload works, in particular the naming convention, "family name", and other metadata, this is not particularly easy. But regardless, we are too far into the current release cycle for changes I am not comfortable making at this point of the cycle.

> Otherwise, I would be in favor of having you reduce the 30Gb image size to
> 27Gb as that should still give us several years of growth. And hopefully by
> the time growth is needed, the size of the "free" VMs will also have
> increased.

I'll do some testing to see what the VMSIZE value should be to end up with a 30GB image. I should have something committed to head today, after which I'll send re@ a request for approval to merge to stable/12 and stable/11.
(In reply to Glen Barber from comment #14) Hi Glen, as I said in comment #12, you may leave the VMSIZE setting as is in the global Makefile.vm and instead add

export VMSIZE=10240M

in release/tools/gce.conf, just like Colin Percival did in release/tools/ec2.conf.
A commit references this bug:

Author: gjb
Date: Wed Oct 24 15:51:55 UTC 2018
New revision: 339684
URL: https://svnweb.freebsd.org/changeset/base/339684

Log:
  Reduce the GCE image size to 27G to be lower than the free quota limit.

  PR:           232313
  MFC after:    3 days
  Sponsored by: The FreeBSD Foundation

Changes:
  head/release/tools/gce.conf
(In reply to Sylvain Garrigues from comment #15)
> (In reply to Glen Barber from comment #14)
>
> Hi Glen, like I said in comment #12, you may leave the VMSIZE setting as is
> in the global Makefile.vm and instead add
>
> export VMSIZE=10240M
>
> in release/tools/gce.conf
>
> just like Colin Percival did in release/tools/ec2.conf

What I meant by my comment #14 was ensuring 27GB was in fact a small enough value to end up with an image smaller than 30GB.
Thanks Glen, that will solve the issue and I can close the bug. At some point in the future we may want to make our cloud VM sizes somewhat more consistent, as Amazon's has VMSIZE=3GB (release/tools/ec2.conf) and Google's now has VMSIZE=27GB (release/tools/gce.conf) while both support growfs - but that can be optimized later.
A commit references this bug:

Author: gjb
Date: Thu Oct 25 15:11:19 UTC 2018
New revision: 339724
URL: https://svnweb.freebsd.org/changeset/base/339724

Log:
  MFC r339684:
   Reduce the GCE image size to 27G to be lower than the free quota limit.

  PR:           232313
  Approved by:  re (kib)
  Sponsored by: The FreeBSD Foundation

Changes:
_U  stable/12/
  stable/12/release/tools/gce.conf
(In reply to Kirk McKusick from comment #4)
> growfs does not work inside a virtual machine spun up in bhyve

Do you have a link to more information on that, Kirk? That seems like a serious bug.
A commit references this bug:

Author: gjb
Date: Thu Oct 25 15:14:16 UTC 2018
New revision: 339725
URL: https://svnweb.freebsd.org/changeset/base/339725

Log:
  MFC r339684:
   Reduce the GCE image size to 27G to be lower than the free quota limit.

  PR:           232313
  Sponsored by: The FreeBSD Foundation

Changes:
_U  stable/11/
  stable/11/release/tools/gce.conf
(In reply to Ed Maste from comment #20)
>> growfs does not work inside a virtual machine spun up in bhyve
> Do you have a link to more information on that Kirk? That seems like a serious bug.

I tried to reproduce that with vm-bhyve and FreeBSD 11.2 and it worked as expected. I'd also vote for reducing the size of the cloud images to something closer to EC2's 3GB, since growfs just works.

# vm img http://ftp.freebsd.org/pub/FreeBSD/releases/VM-IMAGES/11.2-RELEASE/amd64/Latest/FreeBSD-11.2-RELEASE-amd64.raw.xz
/zroot/vm/.img/FreeBSD-11.2-RELEASE-amd64.raw.    287 MB   10 MBps   27s
# vm img
DATASTORE  FILENAME
default    FreeBSD-11.2-RELEASE-amd64.raw
# vm create -t freebsd -c 4 -m 2048 -i FreeBSD-11.2-RELEASE-amd64.raw -s 200G my-freebsd
# vm start -f my-freebsd
/boot/kernel/kernel text=0x1547b08 data=0x143f30+0x4bc418 syms=[0x8+0x16ad00+0x8+0x183cac]
Booting...
[sniped boot log]
Thu Jan 10 10:23:09 UTC 2019

FreeBSD/amd64 (freebsd) (ttyu0)

login: root
Jan 10 10:23:14 freebsd login: ROOT LOGIN (root) ON ttyu0
FreeBSD 11.2-RELEASE (GENERIC) #0 r335510: Fri Jun 22 04:32:14 UTC 2018

Welcome to FreeBSD!
[sniped motd]
root@freebsd:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/gpt/rootfs     29G    1.5G     25G     5%    /
devfs              1.0K    1.0K      0B   100%    /dev
root@freebsd:~ # /etc/rc.d/growfs onestart
Growing root partition to fill device
vtbd0 recovered
vtbd0p3 resized
gpart: arg0 'gpt/rootfs': Invalid argument
super-block backups (for fsck_ffs -b #) at:
 64112192, 65394432, 66676672, 67958912, 69241152, 70523392, 71805632,
 73087872, 74370112, 75652352, 76934592, 78216832, 79499072, 80781312,
 82063552, 83345792, 84628032, 85910272,
[sniped growfs log]
WARNING: /: reload pending error: blocks 0 files 1
root@freebsd:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/gpt/rootfs    193G    1.5G    176G     1%    /
devfs              1.0K    1.0K      0B   100%    /dev
(In reply to Mateusz Kwiatkowski from comment #22) Trying to reproduce your test. The problem that I have is that I cannot increase the size of the virtual disk in my virtual machine. What is the program `vm' that you are using? I use `/usr/share/examples/bhyve/vmrun.sh' and it does not have an option to set the disk size in the started up virtual machine.
(In reply to Kirk McKusick from comment #23) For vmrun.sh, you can probably just use 'truncate -s 30g myvm.img' to grow the size of the image used by Bhyve.
(In reply to Conrad Meyer from comment #24) Thanks, doing the 'truncate -s 30g myvm.img' did the trick and I was able to then successfully growfs the filesystem. I did cause a panic because of having active snapshots, but that is a different bug that I know how to fix (and will do so shortly). So, bottom line is that I agree that it is entirely reasonable to make the images much smaller.
(In reply to Kirk McKusick from comment #25)
> What is the program `vm' that you are using?

It's provided by the vm-bhyve package, which I recommend. :-)

> So, bottom line is that I agree that it is entirely reasonable to make the images much smaller.

Thank you very much! Can we then re-open this issue and make it actionable?
(In reply to Mateusz Kwiatkowski from comment #26) Perhaps submit a new PR to track the ~3GB request since this PR was specifically about being above the free quota.
A commit references this bug:

Author: gjb
Date: Tue Apr 30 14:29:09 UTC 2019
New revision: 346959
URL: https://svnweb.freebsd.org/changeset/base/346959

Log:
  Reduce the default image size for virtual machine disk images from
  30GB to 3GB.  The raw images can be resized using truncate(1), and
  other formats can be resized with tools included with other
  hypervisors.

  Enable the growfs(8) rc(8) script at firstboot if the disk was
  resized prior to booting the virtual machine for the first time.

  Discussed with: several
  PR:           232313 (requested in other context)
  MFC after:    3 days
  Sponsored by: The FreeBSD Foundation

Changes:
  head/release/Makefile.vm
  head/release/tools/gce.conf
  head/release/tools/vmimage.subr
(In reply to commit-hook from comment #28)

Hi. I think we need to increase the initial VM size a little more for other images, like the Vagrant ones. Bootstrap is not working anymore, at least in the Vagrant/VirtualBox 12.0-STABLE and 13-CURRENT boxes.

vagrant@freebsd:~ % uname -UK
1200510 1200510
vagrant@freebsd:~ % df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/gpt/rootfs    2.9G    2.6G     35M    99%    /
devfs              1.0K    1.0K      0B   100%    /dev

root@freebsd:~ # pkg info
The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: N
# pkg bootstrap -y
Bootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:12:amd64/latest, please wait...
Verifying signature with trusted certificate pkg.freebsd.org.2013102301... done
Installing pkg-1.10.5_5...
Extracting pkg-1.10.5_5: 100%
# pkg install -y firstboot-freebsd-update firstboot-pkgs sudo rsync virtualbox-ose-additions-nox11
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 8 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
	firstboot-freebsd-update: 1.2
	firstboot-pkgs: 1.5
	sudo: 1.8.27_1
	rsync: 3.1.3
	virtualbox-ose-additions-nox11: 5.2.30
	gettext-runtime: 0.19.8.1_2
	indexinfo: 0.3.1
	libiconv: 1.14_11

Number of packages to be installed: 8

The process will require 10 MiB more space.
2 MiB to be downloaded.
pkg: Not enough space in /var/cache/pkg, needed 2239 KiB available -29 MiB

root@freebsd:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/gpt/rootfs    2.9G    2.7G    -29M   101%    /
devfs              1.0K    1.0K      0B   100%    /dev

root@freebsd:~ # cat /etc/rc.conf
hostname="freebsd"
firstboot_freebsd_update_enable=YES
firstboot_pkgs_enable=YES
firstboot_pkgs_list="sudo rsync virtualbox-ose-additions-nox11"
vboxguest_enable="YES"
vboxservice_enable="YES"
ifconfig_DEFAULT="SYNCDHCP"
sshd_enable="YES"
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"

There is no growfs_enable in it either.
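Until the images themselves are fixed, one possible stopgap (a sketch only, assuming a VirtualBox-backed box; the disk paths are hypothetical, and a VMDK-format disk would first need converting before it can be resized) is to grow the box disk from the host and then grow the filesystem from inside the guest:

  (on the host, with the VM powered off)
  # VBoxManage clonemedium disk.vmdk disk.vdi --format VDI
  # VBoxManage modifymedium disk.vdi --resize 30720     (size in MB)
  ...attach the resized VDI in place of the VMDK, boot, then inside the guest:
  # sysrc growfs_enable=YES
  # service growfs onestart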
(In reply to Danilo G. Baio from comment #29) The issue is that there is no growfs_enable. If there were and the image had been increased in size before its first boot, then it would increase in size and would work as desired. The point of the current size is that it is as large as it can be to run in the `free' mode on Amazon cloud.
(In reply to Kirk McKusick from comment #30)
> (In reply to Danilo G. Baio from comment #29)
> The issue is that there is no growfs_enable.

Oddly, growfs_enable is set in the rc.conf on the raw image resulting from the build.

# cat /etc/rc.conf
hostname="freebsd"
ntpd_enable=YES
sshd_enable=YES
growfs_enable=YES
firstboot_pkgs_enable=YES
firstboot_freebsd_update_enable=YES
google_startup_enable=YES
google_accounts_daemon_enable=YES
google_clock_skew_daemon_enable=YES
google_instance_setup_enable=YES
google_network_daemon_enable=YES
dumpdev="AUTO"
ifconfig_DEFAULT="SYNCDHCP mtu 1460"
ntpd_sync_on_start="YES"
# need to fill in something here
#firstboot_pkgs_list=""
panicmail_autosubmit="YES"

So, I'm scratching my head over this at the moment...
(In reply to Glen Barber from comment #31)
> (In reply to Kirk McKusick from comment #30)
> > (In reply to Danilo G. Baio from comment #29)
> > The issue is that there is no growfs_enable.
>
> Oddly, growfs_enable is set in the rc.conf on the raw image resulting from
> the build.
> [...]
>
> So, I'm scratching my head over this at the moment...

Oops. I did not notice that the new additions to this PR were related to Vagrant, not GCE. So, ignore my comment.
(In reply to Danilo G. Baio from comment #29)
> (In reply to commit-hook from comment #28)
>
> Hi.
>
> I think we need to increase a little more the initial vm size of other
> images like Vagrant ones.

Thank you for the report. I had mistakenly thought this was a followup regarding GCE images (based on the subject of the PR), as I only initially skimmed the details; I now see it is about Vagrant. I'll take a look at this, though.
(In reply to Glen Barber from comment #33) Sorry Glen, I should have opened a new PR; it's now open as #238226. And thanks for your time.