Tomáš Golembiovský
2020-Jun-16 14:34 UTC
Re: [Libguestfs] [PATCH virt-v2v] v2v: Allow temporary directory to be set on a global basis.
On Wed, 10 Jun 2020 11:31:33 +0100
"Richard W.M. Jones" <rjones@redhat.com> wrote:

> I finally got access to the container.  This is how it's configured:
>
> * / => an overlay fs.
>
> There is sufficient space here, and there are no "funny" restrictions,
> to be able to create the libguestfs appliance.  I proved this by
> setting TMPDIR to a temporary directory under / and running
> libguestfs-test-tool.
>
> There appears to be quite a lot of free space here, so in fact the
> v2vovl files could easily be stored here too.  (They only store the
> conversion delta, not the full guest images.)

The thing is that nobody can guarantee that there will be enough space.
The underlying filesystem (not the mountpoint) is shared between all the
containers running on the host. This is the reason why we have a PV on
/var/tmp -- to make sure we have guaranteed free space.

> * /var/tmp => an NFS mount from a PVC
>
> This is a large (2T) external NFS mount.

I assume that is the free space in the underlying filesystem. From there
you should be guaranteed to "own" only 2GB (or something like that).

> I actually started two pods
> to see if they got the same NFS mount point, and they do.  Also I
> wrote files to /var/tmp in one pod and they were visible in the other.
> So this seems shared.

You mean you run two pods based on some YAML template or you run two
pods from Kubevirt web UI? If you run the pods manually you may have
reused the existing PV/PVC. It is the web UI that should provision you
new scratch space for each pod. If that is not working then that is a
bug in Kubevirt.

> Also it uses root squash (so root:root is
> mapped to 99:99).

IMHO this is the main problem that I have been telling them about from
the start. Thanks for confirming it. Using root squash on the mount is
plain wrong.

    Tomas

> For both reasons this cannot be used for the
> appliance.  If it was mounted at another location it might be used for
> the v2vovl files.
>
> I've attached the exact mount details at the end of this email.
>
> My conclusion is that we could do one of two things:
>
> Either:
>
> (1) Easiest solution is simply not mount anything under /var/tmp, and
> let it be local storage.  Assuming all these containers are getting ~40G
> of local storage, that's more than enough for virt-v2v to run and
> store the appliance and overlays.  Everything should just work once
> you remove that /var/tmp mountpoint and leave it as local storage.
>
> ie these lines are removed:
>         - mountPath: /var/tmp
>           name: v2v-conversion-temp
>
> Or:
>
> (2) We could implement more fine-grained temporary directory control,
> allowing the appliance and v2vovl* files to be placed separately.
> However it would still be wrong to mount the place where libguestfs
> creates the appliance (by default /var/tmp) on NFS.
>
> If you do this then you'd want to mount the large NFS storage
> somewhere else, and there would be a new environment variable
> (V2V_TMPDIR was my proposal IIRC) which you would point to the NFS
> mount.  /var/tmp would be local storage, and used for the appliance.
> (There are other ways to do this if for some reason /var/tmp must be NFS.)
>
> Thanks Igor and Tomas for helping to get access to the environment.
>
> Rich.
>
>
> Mount entries:
>
> overlay on / type overlay (rw,relatime,context="system_u:object_r:container_file_t:s0:c581,c761",lowerdir=/var/lib/containers/storage/overlay/l/R65BQQOII4EN66JKVROCRZX4DA:/var/lib/containers/storage/overlay/l/VK5ZPTQFJK7RG4DMBQ6IUDKVYS:/var/lib/containers/storage/overlay/l/QNYZ757HCAAQMJJZUZ6D452CSS,upperdir=/var/lib/containers/storage/overlay/76d93cb1256f566100ec2a7e5b5c4b84acc0bfa6a3cb4ebe0adbdb4a0ffc1a9c/diff,workdir=/var/lib/containers/storage/overlay/76d93cb1256f566100ec2a7e5b5c4b84acc0bfa6a3cb4ebe0adbdb4a0ffc1a9c/work)
>
> [nfs-server]:/NFSv4_vol_cnv/ibragins-2-3.cnv-qe.rhcloud.com.pvs/pv9 on /var/tmp type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.38,local_lock=none,addr=10.9.96.20)
>
>
> # df -h
> Filesystem                                                             Size  Used  Avail Use% Mounted on
> overlay                                                                 40G   17G    24G  41% /
> tmpfs                                                                   64M     0    64M   0% /dev
> tmpfs                                                                  3.9G     0   3.9G   0% /sys/fs/cgroup
> shm                                                                     64M     0    64M   0% /dev/shm
> tmpfs                                                                  3.9G  7.2M   3.9G   1% /etc/hostname
> tmpfs                                                                  3.9G  4.0K   3.9G   1% /data/input
> devtmpfs                                                               3.9G     0   3.9G   0% /dev/kvm
> /dev/mapper/coreos-luks-root-nocrypt                                    40G   17G    24G  41% /etc/hosts
> [nfs-server]:/NFSv4_vol_cnv/ibragins-2-3.cnv-qe.rhcloud.com.pvs/pv9    2.0T  945G  1002G  49% /var/tmp
> [nfs-server]:/NFSv4_vol_cnv/ibragins-2-3.cnv-qe.rhcloud.com.pvs/pv15   2.0T  945G  1002G  49% /data/vm/disk1
> tmpfs                                                                  3.9G   24K   3.9G   1% /run/secrets/kubernetes.io/serviceaccount
>
>
> --
> Richard Jones, Virtualization Group, Red Hat  http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> Fedora Windows cross-compiler. Compile Windows programs, test, and
> build Windows installers. Over 100 libraries supported.
> http://fedoraproject.org/wiki/MinGW
>

--
Tomáš Golembiovský <tgolembi@redhat.com>
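To make option (2) above more concrete, here is a minimal sketch of how the conversion pod might be set up once such a variable existed. The mount point, guest name and output directory are invented for illustration, and V2V_TMPDIR is only the variable name proposed in this thread, not an implemented interface:

    # /var/tmp stays on container-local storage, so the libguestfs
    # appliance is still built there by default.  The large NFS PV is
    # mounted somewhere else, e.g. /mnt/v2v-scratch (hypothetical path).
    export V2V_TMPDIR=/mnt/v2v-scratch    # proposed variable, not yet implemented
    virt-v2v -i libvirt examplevm -o local -os /data/out    # guest and output dir are illustrative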
Richard W.M. Jones
2020-Jun-16 21:06 UTC
Re: [Libguestfs] [PATCH virt-v2v] v2v: Allow temporary directory to be set on a global basis.
On Tue, Jun 16, 2020 at 04:34:15PM +0200, Tomáš Golembiovský wrote:
> On Wed, 10 Jun 2020 11:31:33 +0100
> "Richard W.M. Jones" <rjones@redhat.com> wrote:
>
> > I finally got access to the container.  This is how it's configured:
> >
> > * / => an overlay fs.
> >
> > There is sufficient space here, and there are no "funny" restrictions,
> > to be able to create the libguestfs appliance.  I proved this by
> > setting TMPDIR to a temporary directory under / and running
> > libguestfs-test-tool.
> >
> > There appears to be quite a lot of free space here, so in fact the
> > v2vovl files could easily be stored here too.  (They only store the
> > conversion delta, not the full guest images.)
>
> The thing is that nobody can guarantee that there will be enough space.
> The underlying filesystem (not the mountpoint) is shared between all the
> containers running on the host. This is the reason why we have a PV on
> /var/tmp -- to make sure we have guaranteed free space.

This must surely be a problem for all containers?  Do containers
behave semi-randomly when the host starts to run out of space?  All
containers must have to assume that there's some space available in
/tmp or /var/tmp surely.

If we can guarantee that each container has 1 or 2 G of free space
(doesn't seem unreasonable?) then virt-v2v should work fine and won't
need any NFS mounts.

> > * /var/tmp => an NFS mount from a PVC
> >
> > This is a large (2T) external NFS mount.
>
> I assume that is the free space in the underlying filesystem. From there
> you should be guaranteed to "own" only 2GB (or something like that).
>
> > I actually started two pods
> > to see if they got the same NFS mount point, and they do.  Also I
> > wrote files to /var/tmp in one pod and they were visible in the other.
> > So this seems shared.
>
> You mean you run two pods based on some YAML template or you run two
> pods from Kubevirt web UI?

Two from a yaml template, however ...

> If you run the pods manually you may have
> reused the existing PV/PVC. It is the web UI that should provision you
> new scratch space for each pod. If that is not working then that is a
> bug in Kubevirt.

... the PVC name was "v2v-conversion-temp" (and not some randomly
generated name) suggesting that either the user must enter a new name
every time or else they're all going to get the same NFS mount.
Can you explain a bit more about how they get different mounts?

> > Also it uses root squash (so root:root is
> > mapped to 99:99).
>
> IMHO this is the main problem that I have been telling them about from
> the start. Thanks for confirming it. Using root squash on the mount is
> plain wrong.

This is definitely the main problem, and is the direct cause of the
error you were seeing.  I'm still not very confident that our locking
will work reliably if two virt-v2v instances in different containers
or pods see a shared /var/tmp.

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/
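As an aside, the root-squash mapping discussed above is easy to confirm from a root shell inside the pod, assuming /var/tmp is the NFS mount in question and is writable by the squashed user (the file name is arbitrary):

    # on a root-squashed NFS mount, a file created by root is owned by
    # the anonymous user (here 99:99) rather than by root (0:0)
    touch /var/tmp/squash-check
    ls -ln /var/tmp/squash-check
    # expected, roughly: -rw-r--r--. 1 99 99 0 Jun 16 21:00 /var/tmp/squash-check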
Tomáš Golembiovský
2020-Jun-17 18:00 UTC
Re: [Libguestfs] [PATCH virt-v2v] v2v: Allow temporary directory to be set on a global basis.
On Tue, 16 Jun 2020 22:06:58 +0100
"Richard W.M. Jones" <rjones@redhat.com> wrote:

> On Tue, Jun 16, 2020 at 04:34:15PM +0200, Tomáš Golembiovský wrote:
> > On Wed, 10 Jun 2020 11:31:33 +0100
> > "Richard W.M. Jones" <rjones@redhat.com> wrote:
> >
> > > I finally got access to the container.  This is how it's configured:
> > >
> > > * / => an overlay fs.
> > >
> > > There is sufficient space here, and there are no "funny" restrictions,
> > > to be able to create the libguestfs appliance.  I proved this by
> > > setting TMPDIR to a temporary directory under / and running
> > > libguestfs-test-tool.
> > >
> > > There appears to be quite a lot of free space here, so in fact the
> > > v2vovl files could easily be stored here too.  (They only store the
> > > conversion delta, not the full guest images.)
> >
> > The thing is that nobody can guarantee that there will be enough space.
> > The underlying filesystem (not the mountpoint) is shared between all the
> > containers running on the host. This is the reason why we have a PV on
> > /var/tmp -- to make sure we have guaranteed free space.
>
> This must surely be a problem for all containers?  Do containers
> behave semi-randomly when the host starts to run out of space?  All
> containers must have to assume that there's some space available in
> /tmp or /var/tmp surely.

My understanding is that a container should not require a significant
amount of free space at runtime. If you need permanent data storage or
a larger amount of temporary storage, you need to use a volume.

>
> If we can guarantee that each container has 1 or 2 G of free space
> (doesn't seem unreasonable?) then virt-v2v should work fine and won't
> need any NFS mounts.

NFS is just one of the methods for provisioning a volume. There is also
a host-local provisioner that makes sure there is enough space on the
local storage -- which seems to be what you are suggesting -- but I do
not recall what the arguments against using it were.

> > > * /var/tmp => an NFS mount from a PVC
> > >
> > > This is a large (2T) external NFS mount.
> >
> > I assume that is the free space in the underlying filesystem. From there
> > you should be guaranteed to "own" only 2GB (or something like that).
> >
> > > I actually started two pods
> > > to see if they got the same NFS mount point, and they do.  Also I
> > > wrote files to /var/tmp in one pod and they were visible in the other.
> > > So this seems shared.
> >
> > You mean you run two pods based on some YAML template or you run two
> > pods from Kubevirt web UI?
>
> Two from a yaml template, however ...
>
> > If you run the pods manually you may have
> > reused the existing PV/PVC. It is the web UI that should provision you
> > new scratch space for each pod. If that is not working then that is a
> > bug in Kubevirt.
>
> ... the PVC name was "v2v-conversion-temp" (and not some randomly
> generated name) suggesting that either the user must enter a new name
> every time or else they're all going to get the same NFS mount.
> Can you explain a bit more about how they get different mounts?

Oh, I see. That does seem wrong then, and is probably a bug in the
Kubevirt web UI. Do you still have access to the testing environment?

>
> > > Also it uses root squash (so root:root is
> > > mapped to 99:99).
> >
> > IMHO this is the main problem that I have been telling them about from
> > the start. Thanks for confirming it. Using root squash on the mount is
> > plain wrong.
>
> This is definitely the main problem, and is the direct cause of the
> error you were seeing.  I'm still not very confident that our locking
> will work reliably if two virt-v2v instances in different containers
> or pods see a shared /var/tmp.

They should not. If they do, then it is a bug somewhere else and not in
virt-v2v.

    Tomas

>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-df lists disk usage of guests without needing to install any
> software inside the virtual machine.  Supports Linux and Windows.
> http://people.redhat.com/~rjones/virt-df/
>

--
Tomáš Golembiovský <tgolembi@redhat.com>
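For anyone retracing the shared-mount observation from earlier in the thread, a rough sketch of the kind of check involved; the pod names are invented, the PVC name is the one mentioned above, and oc can be used in place of kubectl:

    # write a file through one pod and look for it through the other
    kubectl exec v2v-pod-1 -- sh -c 'echo shared > /var/tmp/share-check'
    kubectl exec v2v-pod-2 -- cat /var/tmp/share-check    # prints "shared" if the PV is shared
    # check which PV the claim is actually bound to
    kubectl get pvc v2v-conversion-temp -o jsonpath='{.spec.volumeName}'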