tech-lists wrote on 2019/03/18 16:25:
> On Mon, Mar 18, 2019 at 09:08:31AM -0600, Alan Somers wrote:
>>
>> Do you mean using a zvol as the backing store for a VM?  If so, then:
>> 1) Yes.  You can just do "zfs set volsize" on the host.
>> 2) In theory no, but the guest may need to be rebooted to notice the
>> change.  And I'm not sure if the current bhyve code will expose the
>> new size without a reboot or not.
>> 3) Sure.  But after you expand the zvol (or before you shrink it),
>> you'll have to change the size of the guest's filesystem using the
>> guest's native tools.

I did this two months ago on FreeBSD 11.2. On the host, with the guest
running:

# zfs set volsize=200G tank1/vol1/bhyve/kotel/disk1

Even after I unmounted the disk in the guest, it still did not see the
new size until I rebooted the guest.

After rebooting the guest, you will see a corrupted GPT:

# gpart show -p vtbd1
=>       40  209715120    vtbd1  GPT  (200G) [CORRUPT]
         40          8           - free -  (4.0K)
         48       1024  vtbd1p1  freebsd-boot  (512K)
       1072        976           - free -  (488K)
       2048  203423744  vtbd1p2  freebsd-ufs  (97G)
  203425792    6289368           - free -  (3.0G)

After running recover, the guest will see the added space:

# gpart recover vtbd1
vtbd1 recovered

# gpart show -p vtbd1
=>        40  419430320    vtbd1  GPT  (200G)
          40          8           - free -  (4.0K)
          48       1024  vtbd1p1  freebsd-boot  (512K)
        1072        976           - free -  (488K)
        2048  203423744  vtbd1p2  freebsd-ufs  (97G)
   203425792  216004568           - free -  (103G)

After this, the partition can finally be enlarged and the filesystem
grown:

# gpart resize -a 1M -s 197G -i 2 vtbd1
# growfs /vol0

Kind regards
Miroslav Lachman
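P.S. One untested footnote: if the guest had been running ZFS on
vtbd1p2 instead of UFS, the last two steps would presumably be the same
resize followed by an online expansion of the pool's device, something
like the following (the pool name "zroot" is only a guess for a default
install; I have not tried this myself):

# gpart resize -a 1M -i 2 vtbd1
# zpool online -e zroot vtbd1p2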
On Mon, Mar 18, 2019 at 06:56:03PM +0100, Miroslav Lachman wrote:
[...]

Thanks for the example, I've saved it.

OK, just one other question, which I might have found the answer to, or
might not. I'm new to this virtualising on ZFS even though I've used
ZFS for years. It's basically this: I made a zvol and installed
12-RELEASE into it. When the disks option came up, I chose the auto
defaults for *ZFS* in the guest. I also selected encryption for both
the virtual disk and swap. I think perhaps I shouldn't have done all
this together in the same VM, because with apache running in it, httpd
got wedged (and then everything got wedged; sync wouldn't return). I
think the top ZFS layer, the encryption layer, and the ZFS underneath
got too busy. Happily the server still responded to a shutdown -r and
came back up. It's scrubbing the zpool now to be on the safe side.

Am I correct in that I should have used UFS in the guest rather than
ZFS? Or was it the encryption?

thanks,
-- 
J.
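P.S. If the problem is double caching from running ZFS on top of a
zvol, one mitigation I've seen suggested (untested by me; the dataset
name is just the one from Miroslav's example) is to have the host's ARC
cache only metadata for the backing zvol, so the guest's own ZFS does
the data caching:

# zfs set primarycache=metadata tank1/vol1/bhyve/kotel/disk1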