Displaying 20 results from an estimated 20 matches for "cfme".

2020 Aug 05 · 0 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 5, 2020 at 3:28 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
>
> Nir, BTW what are you using for performance testing?
virt-v2v with local image, or imageio client with local image.
> As far as I can tell it's not possible to make qemu-img convert use
> multi-conn when connecting to the source (which is going to be a
> problem if we want to use this

2020 Aug 05 · 0 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
Here are some results anyway. The command I'm using is:
$ ./nbdkit -r -U - vddk \
libdir=/path/to/vmware-vix-disklib-distrib \
user=root password='***' \
server='***' thumbprint=aa:bb:cc:... \
vm=moref=3 \
file='[datastore1] Fedora 28/Fedora 28.vmdk' \
--run 'time /var/tmp/threaded-reads $unixsocket'
Source for threaded-reads is

2020 Aug 05 · 0 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 5, 2020 at 2:58 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> On Wed, Aug 05, 2020 at 02:39:44PM +0300, Nir Soffer wrote:
> > Can we use something like the file plugin? thread pool of workers,
> > each keeping open vddk handle, and serving requests in parallel from
> > the same nbd socket?
>
> Yes, but this isn't implemented in the

2020 Aug 05 · 0 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 5, 2020 at 2:15 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> [NB: Adding PUBLIC mailing list because this is upstream discussion]
>
> On Mon, Aug 03, 2020 at 06:27:04PM +0100, Richard W.M. Jones wrote:
> > On Mon, Aug 03, 2020 at 06:03:23PM +0300, Nir Soffer wrote:
> > > On Mon, Aug 3, 2020 at 5:47 PM Richard W.M. Jones <rjones@redhat.com>

2020 Aug 05 · 0 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 5, 2020 at 5:10 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> On Wed, Aug 05, 2020 at 04:49:04PM +0300, Nir Soffer wrote:
> > I see, can we change the python plugin to support multiple connections to imageio
> > using SERIALIZE_REQUESTS?
> >
> > The GIL should not limit us since the GIL is released when you write to
> > imageio socket, and

2020 Aug 05 · 1 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 5, 2020 at 3:47 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
>
> Here are some results anyway. The command I'm using is:
>
> $ ./nbdkit -r -U - vddk \
> libdir=/path/to/vmware-vix-disklib-distrib \
> user=root password='***' \
> server='***' thumbprint=aa:bb:cc:... \
> vm=moref=3 \
>

2020 Aug 05 · 0 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 5, 2020 at 4:28 PM Richard W.M. Jones <rjones@redhat.com> wrote:
>
> On Wed, Aug 05, 2020 at 03:40:43PM +0300, Nir Soffer wrote:
> > On Wed, Aug 5, 2020 at 2:58 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> > >
> > > On Wed, Aug 05, 2020 at 02:39:44PM +0300, Nir Soffer wrote:
> > > > Can we use something like the file plugin?

2020 Aug 05 · 2 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 05, 2020 at 02:39:44PM +0300, Nir Soffer wrote:
> Can we use something like the file plugin? thread pool of workers,
> each keeping open vddk handle, and serving requests in parallel from
> the same nbd socket?
Yes, but this isn't implemented in the plugins, it's implemented in
the server. The server always uses a thread pool, but plugins can opt
for more or less
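The design quoted above (a pool of workers, each keeping its own open handle, all serving requests arriving from one socket) can be sketched in Python. This is an illustrative stand-in, not nbdkit's implementation, which keeps the thread pool in the C server; the function name and the queue-based dispatch here are assumptions, with a per-worker file descriptor standing in for a per-worker VDDK handle.

```python
import os
import queue
import threading

def parallel_pread(path, offsets, length, nworkers=4):
    """Serve read requests with a pool of workers, each keeping its own
    open file descriptor (analogous to one VDDK handle per worker)."""
    requests = queue.Queue()
    results = {}
    lock = threading.Lock()

    def worker():
        fd = os.open(path, os.O_RDONLY)   # per-worker handle
        try:
            while True:
                off = requests.get()
                if off is None:           # shutdown sentinel
                    break
                data = os.pread(fd, length, off)
                with lock:
                    results[off] = data
        finally:
            os.close(fd)

    threads = [threading.Thread(target=worker) for _ in range(nworkers)]
    for t in threads:
        t.start()
    for off in offsets:
        requests.put(off)
    for _ in threads:
        requests.put(None)                # one sentinel per worker
    for t in threads:
        t.join()
    return results
```

The point of the shape is that request dispatch is shared while the expensive handle is not, which is exactly what makes it safe to run the workers fully in parallel.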

2020 Aug 05 · 3 · More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
[NB: Adding PUBLIC mailing list because this is upstream discussion]
On Mon, Aug 03, 2020 at 06:27:04PM +0100, Richard W.M. Jones wrote:
> On Mon, Aug 03, 2020 at 06:03:23PM +0300, Nir Soffer wrote:
> > On Mon, Aug 3, 2020 at 5:47 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> > All this makes sense, but when we upload 10 disks we have 10 connections
> > but still we

2020 Aug 05 · 2 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 05, 2020 at 03:40:43PM +0300, Nir Soffer wrote:
> On Wed, Aug 5, 2020 at 2:58 PM Richard W.M. Jones <rjones@redhat.com> wrote:
> >
> > On Wed, Aug 05, 2020 at 02:39:44PM +0300, Nir Soffer wrote:
> > > Can we use something like the file plugin? thread pool of workers,
> > > each keeping open vddk handle, and serving requests in parallel from
>

2020 Aug 05 · 5 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
Nir, BTW what are you using for performance testing?
As far as I can tell it's not possible to make qemu-img convert use
multi-conn when connecting to the source (which is going to be a
problem if we want to use this stuff in virt-v2v).
Instead I've hacked up a copy of this program from libnbd:
https://github.com/libguestfs/libnbd/blob/master/examples/threaded-reads-and-writes.c
so

2018 Jun 25 · 0 · Re: v2v: -o rhv-upload: Use Unix domain socket to access imageio (RHBZ#1588088).
...hard W.M. Jones <rjones@redhat.com>
wrote:
> These two patches add support for using a Unix domain socket to
> directly access imageio in the case where imageio is running on the
> conversion host (usually that means virt-v2v is running on the RHV
> node and something else -- eg. CFME scripts -- arranges that the RHV
> node is the same one running imageio).
>
Actually CFME does not know anything about this optimization. It is virt-v2v
starting the transfer on the same host it is running on.
>
> Conversions in the normal case are not affected - they happen over TCP...

2018 Jun 21 · 6 · v2v: -o rhv-upload: Use Unix domain socket to access imageio (RHBZ#1588088).
These two patches add support for using a Unix domain socket to
directly access imageio in the case where imageio is running on the
conversion host (usually that means virt-v2v is running on the RHV
node and something else -- eg. CFME scripts -- arranges that the RHV
node is the same one running imageio).
Conversions in the normal case are not affected - they happen over TCP
as usual.
This was extremely hard to test, but I did eventually manage to test
it both ways. The log from the Unix domain socket case is here:
https:/...
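The optimization described above amounts to speaking the same imageio HTTP protocol over an AF_UNIX socket instead of TCP. A minimal Python sketch of such a client follows; the class name and socket path are hypothetical, and virt-v2v itself is OCaml, not Python, so this only illustrates the mechanism.

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client.HTTPConnection that connects over an AF_UNIX socket
    instead of TCP (hypothetical helper, not the virt-v2v code)."""

    def __init__(self, socket_path, timeout=10):
        # The host name is only used for the Host: header; no TCP
        # connection is ever made to it.
        super().__init__("localhost", timeout=timeout)
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.settimeout(self.timeout)
        self.sock.connect(self.socket_path)
```

A transfer would then look like `conn = UnixHTTPConnection("/run/imageio.sock")` followed by `conn.request("PUT", "/images/<ticket>", body=data)` (both paths hypothetical), never touching the TCP stack.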

2020 Aug 05 · 5 · Re: More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)
On Wed, Aug 05, 2020 at 04:49:04PM +0300, Nir Soffer wrote:
> I see, can we change the python plugin to support multiple connections to imageio
> using SERIALIZE_REQUESTS?
>
> The GIL should not limit us since the GIL is released when you write to
> imageio socket, and this is likely where the plugin spends most of the time.
It's an interesting question and one I'd not really
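The point quoted above, that the GIL is released during a blocking socket write, is what would let a Python plugin using SERIALIZE_REQUESTS still overlap I/O across several connections. A stdlib-only sanity check of the mechanism (thread count and buffer size are arbitrary choices, not anything from the thread):

```python
import socket
import threading

def send_from_threads(nthreads=4, nbytes=1 << 20):
    """Each writer thread blocks in sendall() on its own socketpair;
    CPython releases the GIL inside the blocking send, so the writers
    genuinely overlap. Readers drain the other ends and record how
    much data arrived."""
    received = {}

    def pipe(i):
        a, b = socket.socketpair()

        def reader():
            got = 0
            while got < nbytes:
                chunk = b.recv(65536)
                if not chunk:
                    break
                got += len(chunk)
            received[i] = got

        r = threading.Thread(target=reader)
        r.start()
        a.sendall(b"\0" * nbytes)   # blocking write; GIL released here
        a.close()
        r.join()
        b.close()

    writers = [threading.Thread(target=pipe, args=(i,)) for i in range(nthreads)]
    for t in writers:
        t.start()
    for t in writers:
        t.join()
    return received
```

The data still arrives intact on every connection, which is the property the plugin would rely on; the GIL only limits pure-Python bytecode, not time spent inside the socket syscalls.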

2018 Feb 27 · 2 · Re: [PATCH] v2v: remove MAC address related information
...ss -- it's
> > at best unclear -- it is thought that VMware might reuse MAC addresses
> > which have "left" the hypervisor, although no one knows if that's
> > really true or not.
> >
> > I don't have much opinion on this. Maybe it's best for CFME to
> > continue to run virt-sysprep as a separate step.
> >
> > Rich.
> >
> > --
> > Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~
> > rjones
> > Read my programming and virtualization blog: http://rwmj.wordpress.com
> >...

2018 Feb 27 · 4 · Re: [PATCH] v2v: remove MAC address related information
...ther or not converted guests need a new MAC address -- it's
at best unclear -- it is thought that VMware might reuse MAC addresses
which have "left" the hypervisor, although no one knows if that's
really true or not.
I don't have much opinion on this. Maybe it's best for CFME to
continue to run virt-sysprep as a separate step.
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from ma...

2018 Feb 27 · 0 · Re: [PATCH] v2v: remove MAC address related information
...ests need a new MAC address -- it's
> at best unclear -- it is thought that VMware might reuse MAC addresses
> which have "left" the hypervisor, although no one knows if that's
> really true or not.
>
> I don't have much opinion on this. Maybe it's best for CFME to
> continue to run virt-sysprep as a separate step.
>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~
> rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> libguestfs lets you edit virtual machines....

2018 Feb 27 · 0 · Re: [PATCH] v2v: remove MAC address related information
...> at best unclear -- it is thought that VMware might reuse MAC addresses
> > > which have "left" the hypervisor, although no one knows if that's
> > > really true or not.
> > >
> > > I don't have much opinion on this. Maybe it's best for CFME to
> > > continue to run virt-sysprep as a separate step.
> > >
> > > Rich.
> > >
> > > --
> > > Richard Jones, Virtualization Group, Red Hat
> http://people.redhat.com/~
> > > rjones
> > > Read my programming and virtualiza...

2018 Feb 27 · 5 · [PATCH] v2v: remove MAC address related information
Remove ties to the MAC address because it is likely to change.
The code is based on operations net-hwaddr and udev-persistent-net of
virt-sysprep.
Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com>
---
v2v/convert_linux.ml | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/v2v/convert_linux.ml b/v2v/convert_linux.ml
index b273785e6..8bba74786 100644
---

2018 Aug 30 · 3 · [PATCH v2 0/2] v2v: Add -o openstack target.
v1 was here:
https://www.redhat.com/archives/libguestfs/2018-August/thread.html#00287
v2:
- The -oa option now gives an error; apparently Cinder cannot
generally control sparse/preallocated behaviour, although certain
Cinder backends can.
- The -os option maps to Cinder volume type; suggested by Matt Booth.
- Add a simple test.