Ivano Talamo
2014-Jun-11 17:55 UTC
[Gluster-users] Gluster 3.5 problems with libgfapi/qemu
Hello,
I recently updated 2 servers (Scientific Linux 6) with a replicate volume from gluster 3.4 to 3.5.0-2.
The volume was previously used to host qemu/kvm VM images accessed via a fuse-mounted mount-point.
Now I would like to use libgfapi, but I'm seeing this error:

[root@cmsrm-service02 ~]# qemu-img info gluster://cmsrm-service02/vol1/vms/disks/cmsrm-ui01.raw2
[2014-06-11 17:47:22.084842] E [afr-common.c:3959:afr_notify] 0-vol1-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
image: gluster://cmsrm-service03/vol1/vms/disks/cmsrm-ui01.raw2
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 4.7G
[2014-06-11 17:47:22.318034] E [afr-common.c:3959:afr_notify] 0-vol1-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.

The error message does not appear if I access the file via the mount-point.

The volume seems fine:

[root@cmsrm-service02 ~]# gluster volume info

Volume Name: vol1
Type: Replicate
Volume ID: 35de92de-d6b3-4784-9ccb-65518e014a49
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: cmsrm-service02:/brick/vol1
Brick2: cmsrm-service03:/brick/vol1
Options Reconfigured:
server.allow-insecure: on

[root@cmsrm-service02 ~]# gluster volume status

Status of volume: vol1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick cmsrm-service02:/brick/vol1               49152   Y       16904
Brick cmsrm-service03:/brick/vol1               49152   Y       12868
NFS Server on localhost                         2049    Y       4263
Self-heal Daemon on localhost                   N/A     Y       4283
NFS Server on 141.108.36.8                      2049    Y       13679
Self-heal Daemon on 141.108.36.8                N/A     Y       13691

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

Thank you,
Ivano
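[Editor's note: as a rough sketch, not part of the original thread, the same image can also be checked directly through libgfapi, independently of qemu-img. The host, volume and image path below are taken from the report above; the program assumes the glusterfs-api development package is installed.]

/* gfapi-probe.c -- minimal sketch; build with:
 *   gcc gfapi-probe.c -o gfapi-probe $(pkg-config --cflags --libs glusterfs-api)
 */
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* Volume, server and image path as in the report above. */
    glfs_t *fs = glfs_new("vol1");
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "cmsrm-service02", 24007);
    /* Keep glusterfs log messages in a file rather than on stderr. */
    glfs_set_logging(fs, "/tmp/gfapi-probe.log", 7);

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    glfs_fd_t *fd = glfs_open(fs, "/vms/disks/cmsrm-ui01.raw2", O_RDONLY);
    if (fd) {
        struct stat st;
        if (glfs_fstat(fd, &st) == 0)
            printf("size: %lld bytes\n", (long long)st.st_size);
        glfs_close(fd);
    } else {
        fprintf(stderr, "glfs_open failed\n");
    }

    glfs_fini(fs); /* graph cleanup happens here */
    return 0;
}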
Ivano,

Did you try to start the VM on the replicated volume? Does it work?

I remember a VM on a replicated volume failed to start (from virsh) in 3.5.0 due to similar errors, but now that I have installed 3.5.1beta2 it starts up successfully, even though qemu-img still shows the same error, which I can ignore now.

Beta2 download: http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.1beta2/

Jae
Vijay Bellur
2014-Jun-12 13:44 UTC
[Gluster-users] Gluster 3.5 problems with libgfapi/qemu
On 06/11/2014 11:25 PM, Ivano Talamo wrote:
> Hello,
> I recently updated 2 servers (Scientific Linux 6) with a replicate volume
> from gluster 3.4 to 3.5.0-2.
> The volume was previously used to host qemu/kvm VM images accessed via a
> fuse-mounted mount-point.
> Now I would like to use libgfapi, but I'm seeing this error:
>
> [root@cmsrm-service02 ~]# qemu-img info
> gluster://cmsrm-service02/vol1/vms/disks/cmsrm-ui01.raw2
> [2014-06-11 17:47:22.084842] E [afr-common.c:3959:afr_notify]
> 0-vol1-replicate-0: All subvolumes are down. Going offline until atleast
> one of them comes back up.
> image: gluster://cmsrm-service03/vol1/vms/disks/cmsrm-ui01.raw2
> file format: raw
> virtual size: 20G (21474836480 bytes)
> disk size: 4.7G
> [2014-06-11 17:47:22.318034] E [afr-common.c:3959:afr_notify]
> 0-vol1-replicate-0: All subvolumes are down. Going offline until atleast
> one of them comes back up.

This is a benign error message. qemu-img initializes a glusterfs graph through libgfapi, performs the operation and then cleans up the graph. The afr translator in glusterfs emits this log message as part of the graph cleanup. IIRC, qemu displays all log messages on stderr by default, and hence this message is seen.

> The error message does not appear if I access the file via the mount-point.

There should be no functional problem even if this message is seen.

-Vijay
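[Editor's note: a minimal sketch, not from the original thread, of the lifecycle described above. A libgfapi program that performs no I/O at all still builds and then tears down the graph, and with logging pointed at stderr the benign afr message would be expected around the teardown step.]

/* lifecycle.c -- sketch of the init/operation/cleanup sequence described above. */
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("vol1");                      /* create a handle for the volume      */
    glfs_set_volfile_server(fs, "tcp", "cmsrm-service02", 24007);
    glfs_set_logging(fs, "/dev/stderr", 7);             /* qemu also logs to stderr by default */

    if (glfs_init(fs) != 0)                             /* graph is fetched, built and started */
        return 1;

    /* ... a read-only operation (e.g. what qemu-img info does) would go here ... */

    glfs_fini(fs);                                      /* graph cleanup: per the explanation
                                                           above, this is where afr logs the
                                                           benign "all subvolumes are down" line */
    return 0;
}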