likun
2016-Dec-30 09:07 UTC
[Gluster-users] how to use a small part of glusterfs volume in kubernetes
Does anyone use GlusterFS in a Kubernetes environment?

We use CoreOS. As you know, since 1.4.3 the CoreOS build of Kubernetes has included the glusterfs-client Debian package in the hyperkube image. So we recently moved our Kubernetes to 1.5.1 and began mounting GlusterFS from pods directly.

But we can only mount the whole GlusterFS volume. Is there any way to use a small part of a GlusterFS volume, like a directory in the volume, limited to 10G through quota? Can a PV and PVC do this?

Sincerely,
Likun
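For context, the kind of direct mount we mean is roughly the following pod spec (the endpoints object, server IP, volume name, and image are placeholders, not our actual configuration):

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster          # referenced by the pod volume below
    subsets:
      - addresses:
          - ip: 192.168.10.11          # a gluster server node
        ports:
          - port: 1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: gluster-test
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: glusterfsvol
              mountPath: /mnt/gluster
      volumes:
        - name: glusterfsvol
          glusterfs:
            endpoints: glusterfs-cluster   # Endpoints object above
            path: myvolume                 # gluster volume name; mounts the WHOLE volume
            readOnly: false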
Vijay Bellur
2016-Dec-30 23:33 UTC
[Gluster-users] how to use a small part of glusterfs volume in kubernetes
On Fri, Dec 30, 2016 at 4:07 AM, likun <kun.li at ucarinc.com> wrote:

> Does anyone use GlusterFS in a Kubernetes environment?
>
> We use CoreOS. As you know, since 1.4.3 the CoreOS build of Kubernetes
> has included the glusterfs-client Debian package in the hyperkube image.
> So we recently moved our Kubernetes to 1.5.1 and began mounting GlusterFS
> from pods directly.
>
> But we can only mount the whole GlusterFS volume. Is there any way to use
> a small part of a GlusterFS volume, like a directory in the volume,
> limited to 10G through quota? Can a PV and PVC do this?

You can possibly accomplish this by mounting the entire GlusterFS volume on the container host and bind-mounting different sub-directories of the volume into different containers. Gluster supports configuring quota on sub-directories. Note that data services like geo-replication, snapshots, etc. cannot be configured for sub-directories.

Are your PVs for read-write-once or read-write-many workloads? We are looking at adding read-write-once support with iSCSI in Gluster 3.10.

Regards,
Vijay
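To make the sub-directory approach concrete, the steps would be along these lines (the volume name, sub-directory, and host paths are examples):

    # Enable quota on the volume, then cap a sub-directory at 10GB
    gluster volume quota myvolume enable
    gluster volume quota myvolume limit-usage /app1 10GB

    # On the container host: mount the full volume once, then
    # bind-mount the quota-limited sub-directory for a container
    mount -t glusterfs server1:/myvolume /mnt/gluster
    mount --bind /mnt/gluster/app1 /srv/containers/app1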
likun
2017-Jan-01 08:30 UTC
[Gluster-users] Re: how to use a small part of glusterfs volume in kubernetes
Since we use CoreOS, mounting directly on the host can't be done. We used a very complex procedure to handle this before: first we mounted the volume into a GlusterFS container, then shared it to the host OS, and then other containers mounted it. But I thought it was too complicated, and abandoned it.

As for PV and PVC, I just did some testing. I set up a PVC with GlusterFS and limited the capacity to 8GB, but when I mounted the corresponding PV from a pod, I had the entire 1.8TB volume. It was not what I expected.

Likun

From: Vijay Bellur [mailto:vbellur at redhat.com]
Sent: December 31, 2016 7:34
To: likun
Cc: gluster-users
Subject: Re: [Gluster-users] how to use a small part of glusterfs volume in kubernetes

On Fri, Dec 30, 2016 at 4:07 AM, likun <kun.li at ucarinc.com> wrote:

Does anyone use GlusterFS in a Kubernetes environment?

We use CoreOS. As you know, since 1.4.3 the CoreOS build of Kubernetes has included the glusterfs-client Debian package in the hyperkube image. So we recently moved our Kubernetes to 1.5.1 and began mounting GlusterFS from pods directly.

But we can only mount the whole GlusterFS volume. Is there any way to use a small part of a GlusterFS volume, like a directory in the volume, limited to 10G through quota? Can a PV and PVC do this?

You can possibly accomplish this by mounting the entire GlusterFS volume on the container host and bind-mounting different sub-directories of the volume into different containers. Gluster supports configuring quota on sub-directories. Note that data services like geo-replication, snapshots, etc. cannot be configured for sub-directories.

Are your PVs for read-write-once or read-write-many workloads? We are looking at adding read-write-once support with iSCSI in Gluster 3.10.

Regards,
Vijay
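For reference, the PV/PVC pair from the test above looked roughly like this (names are placeholders). The PV's capacity field is only matched against the claim by Kubernetes; it is not enforced as a quota by the glusterfs volume plugin, which would explain why the pod sees the full 1.8TB:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-pv
    spec:
      capacity:
        storage: 8Gi             # advisory: used to match claims, not enforced as a quota
      accessModes:
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster
        path: myvolume           # the plugin still mounts the entire gluster volume
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 8Gi           # satisfied by any PV with capacity >= 8Gi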