Christopher Schmidt
2017-Aug-10 17:09 UTC
[Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin
Ok, thanks.

Humble Devassy Chirammal <humble.devassy at gmail.com> wrote on Thu, 10 Aug 2017, 19:04:

> On Thu, Aug 10, 2017 at 10:25 PM, Christopher Schmidt <fakod666 at gmail.com> wrote:
>
>> Just created the container from here:
>> https://github.com/gluster/gluster-containers/tree/master/CentOS
>>
>> And used stock Kubernetes 1.7.3, hence the included volume plugin and
>> Heketi version 4.
>
> Regardless of the glusterfs client version, this is supposed to work. One
> patch has gone into the 1.7.3 tree which could have broken it.
> I am checking on the same and will get back as soon as I have an update.
>
>> Humble Devassy Chirammal <humble.devassy at gmail.com> wrote on Thu, 10 Aug 2017, 18:49:
>>
>>> Thanks. It's the same option. Can you let me know your glusterfs
>>> client package version?
>>>
>>> On Thu, Aug 10, 2017 at 8:34 PM, Christopher Schmidt <fakod666 at gmail.com> wrote:
>>>
>>>> A short excerpt from kubectl describe pod:
>>>>
>>>> Events:
>>>> FirstSeen LastSeen Count From SubObjectPath Type Reason Message
>>>> --------- -------- ----- ---- ------------- -------- ------ -------
>>>> 5h 54s 173 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>> FailedMount (combined from similar events): MountVolume.SetUp failed
>>>> for volume "pvc-fa4b2621-7dad-11e7-8a44-062df200059f" : glusterfs: mount
>>>> failed: mount failed: exit status 1
>>>> Mounting command: mount
>>>> Mounting arguments: 159.100.242.235:vol_7a312f660490387c94cfaf84bce81bca
>>>> /var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f
>>>> glusterfs [auto_unmount log-level=ERROR
>>>> log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f/es-data-log-distributed-0-glusterfs.log
>>>> backup-volfile-servers=159.100.240.237:159.100.242.156:159.100.242.235]
>>>> Output: /usr/bin/fusermount-glusterfs: mount failed: Invalid argument
>>>> Mount failed. Please check the log file for more details.
>>>>
>>>> The following error information was pulled from the glusterfs log to
>>>> help diagnose this issue:
>>>> [2017-08-10 15:02:12.260168] W [glusterfsd.c:1095:cleanup_and_exit]
>>>> (-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f3e2477c62d]
>>>> (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f3e24e43064]
>>>> (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x4073bd]))) 0-:
>>>> received signum (15), shutting down
>>>> [2017-08-10 15:02:12.260189] I [fuse-bridge.c:5475:fini] 0-fuse:
>>>> Unmounting
>>>> '/var/lib/kubelet/pods/fa4c6540-7dad-11e7-8a44-062df200059f/volumes/kubernetes.io~glusterfs/pvc-fa4b2621-7dad-11e7-8a44-062df200059f'.
>>>>
>>>> 5h 32s 151 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>> FailedMount Unable to mount volumes for pod
>>>> "es-data-log-distributed-0_monitoring(fa4c6540-7dad-11e7-8a44-062df200059f)":
>>>> timeout expired waiting for volumes to attach/mount for pod
>>>> "monitoring"/"es-data-log-distributed-0". list of unattached/unmounted
>>>> volumes=[es-data]
>>>> 5h 32s 163 kubelet, k8s-bootcamp-rbac-np-worker-6263f70 Warning
>>>> FailedSync Error syncing pod
>>>>
>>>> Humble Devassy Chirammal <humble.devassy at gmail.com> wrote on Thu, 10 Aug 2017 at 16:55:
>>>>
>>>>> Are you seeing an issue or an error message which says the auto_unmount
>>>>> option is not valid?
>>>>> Can you please let me know the issue you are seeing with 1.7.3?
>>>>>
>>>>> --Humble
>>>>>
>>>>> On Thu, Aug 10, 2017 at 3:33 PM, Christopher Schmidt <fakod666 at gmail.com> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I am testing K8s 1.7.3 together with GlusterFS and have some issues.
>>>>>>
>>>>>> Is this correct?
>>>>>> - Kubernetes v1.7.3 ships with a GlusterFS plugin that depends on
>>>>>>   GlusterFS 3.11
>>>>>> - GlusterFS 3.11 is not recommended for production, so 3.10 should
>>>>>>   be used
>>>>>>
>>>>>> This actually means no K8s 1.7.x version works, right?
>>>>>> Or is there anything else I could do?
>>>>>>
>>>>>> best Christopher
>>>>>>
>>>>>> _______________________________________________
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users at gluster.org
>>>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
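The "Invalid argument" from fusermount-glusterfs above is consistent with a client that does not recognize the auto_unmount option, which, per the replies in this thread, needs glusterfs 3.11. A minimal sketch (the 3.11.0 threshold is taken from those replies, not verified here) that checks the node's installed client against that version:

```shell
# Sketch: is the local glusterfs client new enough for auto_unmount?
# Assumption (from this thread): the option needs glusterfs >= 3.11.
min_version="3.11.0"

version_ge() {
  # True when $1 >= $2 under natural version ordering (GNU sort -V).
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# `glusterfs --version` prints the version as the second word of its
# first line, e.g. "glusterfs 3.10.5 ...".
client_version="$(glusterfs --version 2>/dev/null | head -n 1 | awk '{print $2}')"

if [ -n "$client_version" ] && version_ge "$client_version" "$min_version"; then
  echo "client $client_version: auto_unmount should be accepted"
else
  echo "client '${client_version:-not installed}': too old for auto_unmount"
fi
```

Running this on each kubelet node would show quickly whether the CentOS container image (or host) ships a pre-3.11 client.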
Humble Devassy Chirammal
2017-Aug-10 17:16 UTC
[Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin
As another solution: if you update the system where you run the application
containers to the latest glusterfs (3.11), this will be fixed as well, since
3.11 supports this mount option.

--Humble

On Thu, Aug 10, 2017 at 10:39 PM, Christopher Schmidt <fakod666 at gmail.com> wrote:

> Ok, thanks.
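To isolate the failure from kubelet, the mount from the event log above can be retried by hand with the same arguments. A sketch that assembles the command and echoes it for inspection before it is run as root (the target /mnt/gluster-test is a made-up scratch directory, not a path from the log):

```shell
# Sketch: rebuild the kubelet mount command from the event log so the
# "Invalid argument" can be reproduced outside Kubernetes.
server="159.100.242.235"
volume="vol_7a312f660490387c94cfaf84bce81bca"
target="/mnt/gluster-test"   # hypothetical scratch mount point
opts="auto_unmount,log-level=ERROR,backup-volfile-servers=159.100.240.237:159.100.242.156:159.100.242.235"

cmd="mount -t glusterfs -o $opts $server:$volume $target"
echo "$cmd"
# Then, as root: mkdir -p "$target" && $cmd
```

Dropping auto_unmount from $opts and retrying is a quick way to confirm that the option, and not the volume itself, triggers the error.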
Christopher Schmidt
2017-Aug-10 18:01 UTC
[Gluster-users] Kubernetes v1.7.3 and GlusterFS Plugin
Yes, I tried to, but I didn't find a 3.11 centos-release-gluster package for
CentOS.

Humble Devassy Chirammal <humble.devassy at gmail.com> wrote on Thu, 10 Aug 2017, 19:17:

> As another solution: if you update the system where you run the application
> containers to the latest glusterfs (3.11), this will be fixed as well, since
> 3.11 supports this mount option.
>
> --Humble
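For reference, the usual route to a newer client on CentOS goes through a Storage SIG release package. The 3.11 package name below is an assumption, and, per the message above, it may simply not have been published at the time; the sketch therefore checks for it before installing:

```shell
# Sketch: install a glusterfs 3.11 FUSE client on CentOS via the Storage
# SIG. The release-package name is an assumption and may not exist.
pkg="centos-release-gluster311"
if yum -q list "$pkg" >/dev/null 2>&1; then
  yum install -y "$pkg" && yum install -y glusterfs-fuse
  glusterfs --version | head -n 1
else
  echo "$pkg not found in the configured repositories"
fi
```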