Thanks for all the responses.
It seems I need to describe our problem further. When the host stores the VM
image on a Gluster volume mounted over NFS (v3), writing large amounts of data
inside the VM reaches full bandwidth. However, when the host mounts the same
volume with the native Gluster client, writes inside the VM only reach about
half the bandwidth.
Since we don't want VM users to see our Gluster file system, mounting the
Gluster volume inside the VM is not allowed.
By the way, where can I get the 3.4a version of Gluster?
Regards,
zhxue
From: gluster-users-request
Date: 2013-01-14 20:00
To: gluster-users
Subject: Gluster-users Digest, Vol 57, Issue 31
Today's Topics:
1. IO performance cut down when VM on Gluster (glusterzhxue)
2. Re: IO performance cut down when VM on Gluster (Joe Julian)
3. Re: IO performance cut down when VM on Gluster
(Stephan von Krawczynski)
4. Re: IO performance cut down when VM on Gluster (Bharata B Rao)
5. dm-glusterfs (was Re: IO performance cut down when VM on
Gluster) (Jeff Darcy)
----------------------------------------------------------------------
Message: 1
Date: Sun, 13 Jan 2013 20:14:36 +0800
From: glusterzhxue <glusterzhxue at 163.com>
To: gluster-users <gluster-users at gluster.org>
Subject: [Gluster-users] IO performance cut down when VM on Gluster
Message-ID: <2013011320143501335810 at 163.com>
Content-Type: text/plain; charset="gb2312"
Hi all,
We placed a virtual machine image (KVM-based) on a Gluster file system, but the
I/O performance inside the VM is only half of the available bandwidth.
If we mount the same volume on a physical machine instead, the physical host
reaches full bandwidth. We repeated the test many times, always with the same
result.
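For reference, the kind of sequential-write test we ran looks roughly like
this (file path and size are placeholders, not our exact commands):

  # Hypothetical write test inside the guest; conv=fdatasync makes dd report
  # sustained write bandwidth rather than page-cache speed.
  dd if=/dev/zero of=/data/testfile bs=1M count=4096 conv=fdatasync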
Can anybody help us?
Thanks
zhxue
------------------------------
Message: 2
Date: Sun, 13 Jan 2013 07:11:14 -0800
From: Joe Julian <joe at julianfamily.org>
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] IO performance cut down when VM on
Gluster
Message-ID: <50F2CE92.6060703 at julianfamily.org>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
On 01/13/2013 04:14 AM, glusterzhxue wrote:
> Hi all,
> We placed a virtual machine image (KVM-based) on a gluster file system,
> but IO performance of the VM is only half of the bandwidth.
> If we mount it on a physical machine using the same volume as the
> above VM, physical host reaches full bandwidth. We performed it many
> times, but each had the same result.
What you're seeing is the difference between bandwidth and latency. When you
write a big file directly to a GlusterFS mount, you're essentially measuring
bandwidth. Writing a file inside a VM's filesystem is not the same workload:
the guest filesystem is also doing journaling, inode updates, and other
metadata work that you don't do when writing to the client directly. That
requires many more I/O operations per second, which amplifies the latency
present in both your network and the context switching through FUSE.
You have two options:
1. Mount the GlusterFS volume from within the VM and host the data you're
operating on there. This avoids all the additional overhead of managing a
filesystem on top of FUSE.
2. Try the 3.4 QA release and the native GlusterFS support in the latest
qemu-kvm, sketched below.
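With option 2, qemu accesses the image through libgfapi instead of going
through a FUSE mount; roughly like this (hostname, volume, and image names
are placeholders):

  # Create an image directly on the volume over libgfapi (no FUSE mount needed)
  qemu-img create -f qcow2 gluster://gluster-server/vmvol/vm1.qcow2 20G

  # Boot the guest against the same gluster:// URL
  qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=gluster://gluster-server/vmvol/vm1.qcow2,if=virtio,cache=none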
------------------------------
Message: 3
Date: Sun, 13 Jan 2013 23:55:01 +0100
From: Stephan von Krawczynski <skraw at ithnet.com>
To: Joe Julian <joe at julianfamily.org>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] IO performance cut down when VM on
Gluster
Message-ID: <20130113235501.c0a2eb24.skraw at ithnet.com>
Content-Type: text/plain; charset=US-ASCII
On Sun, 13 Jan 2013 07:11:14 -0800
Joe Julian <joe at julianfamily.org> wrote:
> On 01/13/2013 04:14 AM, glusterzhxue wrote:
> > Hi all,
> > We placed a virtual machine image (KVM-based) on a gluster file system,
> > but IO performance of the VM is only half of the bandwidth.
> > If we mount it on a physical machine using the same volume as the
> > above VM, physical host reaches full bandwidth. We performed it many
> > times, but each had the same result.
> What you're seeing is the difference between bandwidth and latency. When
> you're writing a big file to a VM filesystem, you're not performing the
> same operations as writing a file to a GlusterFS mount thus you're able
> to measure bandwidth. The filesystem within the VM is doing things like
> journaling, inode operations, etc. that you don't have to do when
> writing to the client requiring a lot more I/O operations per second,
> thus amplifying the latency present in both your network and the context
> switching through FUSE.
>
> You have two options:
> 1. Mount the GlusterFS volume from within the VM and host the data
> you're operating on there. This avoids all the additional overhead of
> managing a filesystem on top of FUSE.
> 2. Try the 3.4 qa release and native GlusterFS support in the latest
> qemu-kvm.
Thank you for telling people openly that FUSE is a performance problem which
could be solved by a kernel-based glusterfs.
Do you want to write drivers for every application, like qemu? How much
manpower will be burnt before the real solution is accepted?
Messing around _inside_ the VM is no solution for most people; you simply
don't want _customers_ on your VM with a glusterfs mount. You want them to
see a local fs only.
--
Regards,
Stephan
------------------------------
Message: 4
Date: Mon, 14 Jan 2013 09:55:53 +0530
From: Bharata B Rao <bharata.rao at gmail.com>
To: Stephan von Krawczynski <skraw at ithnet.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] IO performance cut down when VM on
Gluster
Message-ID:
<CAGZKiBr--fYF-Awq0cYXJx1wPB52Odgm_PArE3Dvrt733mfwZw at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Mon, Jan 14, 2013 at 4:25 AM, Stephan von Krawczynski
<skraw at ithnet.com> wrote:
>
> Thank you for telling the people openly that FUSE is a performance problem
> which could be solved by a kernel-based glusterfs.
>
> Do you want to make drivers for every application like qemu? How many burnt
> manpower will it take until the real solution is accepted?
> It is no solution to mess around _inside_ the VM for most people, you simply
> don't want _customers_ on your VM with a glusterfs mount. You want them to see
> a local fs only.
Just wondering if there is value in doing a dm-glusterfs along the lines of
dm-nfs
(https://blogs.oracle.com/OTNGarage/entry/simplify_your_storage_management_with).
I understand that GlusterFS, with its stackable translator nature and the
need to deal with multiple translators at the client end, might not fit this
model easily, but maybe it is something to think about?
Regards,
Bharata.
--
http://raobharata.wordpress.com/
------------------------------
Message: 5
Date: Mon, 14 Jan 2013 06:53:58 -0500
From: Jeff Darcy <jdarcy at redhat.com>
To: gluster-users at gluster.org
Subject: [Gluster-users] dm-glusterfs (was Re: IO performance cut down
when VM on Gluster)
Message-ID: <50F3F1D6.10405 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1
On 1/13/13 11:25 PM, Bharata B Rao wrote:
> Just wondering if there is a value in doing dm-glusterfs on the lines
> similar to dm-nfs
> (https://blogs.oracle.com/OTNGarage/entry/simplify_your_storage_management_with).
>
> I understand GlusterFS due to its stackable translator nature and
> having to deal with multiple translators at the client end might not
> easily fit to this model, but may be something to think about ?
It's an interesting idea. You're also right that there are some issues with
the stackable translator model and so on. Porting all of that code into the
kernel would require an almost suicidal suspension of all other development
activity while competitors continue to catch up on manageability or add other
features, so that's not very appealing. Keeping it all out in user space with
a minimal kernel-interception layer would give us something better than FUSE
(I did something like this in a previous life, BTW), but probably not enough
better to be compelling. A hybrid "fast path, slow path" approach might work:
keep all of the code for common-case reads and writes in the kernel, and punt
everything else back up to user space, with hooks to disable the fast path
when necessary (e.g. during a config change). OTOH, how would this be better
than e.g. an iSCSI target, which is deployable today with essentially the same
functionality and even greater generality (e.g. to non-Linux platforms)?
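As a rough sketch of that existing alternative, assuming the tgt userspace
target and placeholder names, a VM image sitting on a GlusterFS mount can be
exported as an iSCSI LUN along these lines:

  # Export an image file that lives on a GlusterFS mount as an iSCSI LUN
  # (target IQN, paths, and the open-access binding are illustrative only).
  tgtadm --lld iscsi --mode target --op new --tid 1 \
         --targetname iqn.2013-01.org.example:glustervol.vm1
  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
         --backing-store /mnt/glustervol/vm1.img
  tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL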
It's good to think about these things. We could implement ten other
alternative access mechanisms (Apache/nginx modules, anyone?) and still burn
fewer resources than we would with "just put it all in the kernel" inanity.
I tried one of our much-touted alternatives recently and, despite having a
kernel client, they achieved less than 1/3 of our performance on this kind of
workload. If we want to eliminate sources of overhead we need to address more
than just that one.
------------------------------
End of Gluster-users Digest, Vol 57, Issue 31
*********************************************