Sending from the right email address :)
tamas
On 02/22/2015 07:18 PM, Tamas Papp wrote:
> This is rather odd.
> Why are other distributions less supported?
>
> I recall the same story for 3.2 -> 3.3 (or something like that).
>
> I solved it by copying w-vol-fuse.vol to w-vol.vol. Is that enough, or
> is the current vol file missing some information, so that I really
> need to run that command (and actually restart the cluster)?
>
> Thanks,
> --
> Sent from mobile
>
> On February 21, 2015 3:34:22 AM Kaushal M <kshlmster at gmail.com> wrote:
>
>> This is a known issue with non-RPM packages. Unlike the RPM
>> (Fedora/EL) packages, the DEB packages and packages for other distros
>> don't regenerate the volfiles after an upgrade. So after the upgrade,
>> GlusterD searches for the new volfiles, which don't exist, and cannot
>> provide clients with a volfile, which makes the mounts fail.
>>
>> You can look at https://bugzilla.redhat.com/show_bug.cgi?id=1191176
>> for more details.
>>
>> tl;dr: stop glusterd, run `glusterd --xlator-option *upgrade=on -N`
>> to regenerate the volfiles, then start glusterd again (on all nodes).
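>>
>> On each node that might look like the following (a sketch: the
>> service name assumes the Ubuntu glusterfs-server package, and the
>> glob is quoted so the shell doesn't expand it):
>>
>>   service glusterfs-server stop
>>   glusterd --xlator-option '*upgrade=on' -N  # regenerate volfiles, then exit
>>   ls /var/lib/glusterd/vols/w-vol/           # regenerated volfiles should be here
>>   service glusterfs-server start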
>>
>> ~kaushal
>>
>> On Fri, Feb 20, 2015 at 8:33 PM, Tamas Papp <tamas.papp at rtfm.co.hu> wrote:
>>
>> hi All,
>>
>> After I rebooted the cluster, Linux clients are working fine,
>> but the nodes themselves cannot mount the volume.
>>
>>
>> 16:01 gl0(pts/0):/var/log/glusterfs$ gluster volume status
>> Status of volume: w-vol
>> Gluster process                             Port   Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gl0:/mnt/brick1/data                  49152  Y       1841
>> Brick gl1:/mnt/brick1/data                  49152  Y       1368
>> Brick gl2:/mnt/brick1/data                  49152  Y       1703
>> Brick gl3:/mnt/brick1/data                  49152  Y       1514
>> Brick gl4:/mnt/brick1/data                  49152  Y       1354
>> NFS Server on localhost                     2049   Y       2986
>> NFS Server on gl1                           2049   Y       1373
>> NFS Server on gl2                           2049   Y       1708
>> NFS Server on gl4                           2049   Y       1359
>> NFS Server on gl3                           2049   Y       1525
>>
>> Task Status of Volume w-vol
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> 16:01 gl0(pts/0):/var/log/glusterfs$ gluster volume info
>>
>> Volume Name: w-vol
>> Type: Distribute
>> Volume ID: ebaa67c4-7429-4106-9ab3-dfc85235a2a1
>> Status: Started
>> Number of Bricks: 5
>> Transport-type: tcp
>> Bricks:
>> Brick1: gl0:/mnt/brick1/data
>> Brick2: gl1:/mnt/brick1/data
>> Brick3: gl2:/mnt/brick1/data
>> Brick4: gl3:/mnt/brick1/data
>> Brick5: gl4:/mnt/brick1/data
>> Options Reconfigured:
>> server.allow-insecure: on
>> performance.cache-size: 4GB
>> performance.flush-behind: on
>> diagnostics.client-log-level: WARNING
>>
>>
>>
>>
>> [2015-02-20 15:00:17.071186] I [MSGID: 100030]
>> [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started running
>> /usr/sbin/glusterfs version 3.6.2 (args: /usr/sbin/glusterfs
>> --acl --direct-io-mode=disable --use-readdirp=no
>> --volfile-server=gl0 --volfile-id=/w-vol /W/Projects)
>> [2015-02-20 15:00:17.076517] E
>> [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to
>> get the 'volume file' from server
>> [2015-02-20 15:00:17.076575] E
>> [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch
>> volume file (key:/w-vol)
>> [2015-02-20 15:00:17.076760] W
>> [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum
>> (0), shutting down
>> [2015-02-20 15:00:17.076791] I [fuse-bridge.c:5599:fini] 0-fuse:
>> Unmounting '/W/Projects'.
>> [2015-02-20 15:00:17.110711] W
>> [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum
>> (15), shutting down
>> [2015-02-20 15:01:17.078206] I [MSGID: 100030]
>> [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started running
>> /usr/sbin/glusterfs version 3.6.2 (args: /usr/sbin/glusterfs
>> --acl --direct-io-mode=disable --use-readdirp=no
>> --volfile-server=gl0 --volfile-id=/w-vol /W/Projects)
>> [2015-02-20 15:01:17.082935] E
>> [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk] 0-glusterfs: failed to
>> get the 'volume file' from server
>> [2015-02-20 15:01:17.082992] E
>> [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch
>> volume file (key:/w-vol)
>> [2015-02-20 15:01:17.083173] W
>> [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum
>> (0), shutting down
>> [2015-02-20 15:01:17.083203] I [fuse-bridge.c:5599:fini] 0-fuse:
>> Unmounting '/W/Projects'.
>>
>>
>> $ uname -a
>> Linux gl0 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 19:36:28
>> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>>
>> $ lsb_release -a
>> No LSB modules are available.
>> Distributor ID: Ubuntu
>> Description: Ubuntu 14.04.2 LTS
>> Release: 14.04
>> Codename: trusty
>>
>>
>>
>> ii glusterfs-client 3.6.2-ubuntu1~trusty3 amd64 clustered file-system (client package)
>> ii glusterfs-common 3.6.2-ubuntu1~trusty3 amd64 GlusterFS common libraries and translator modules
>> ii glusterfs-server 3.6.2-ubuntu1~trusty3 amd64 clustered file-system (server package)
>>
>>
>>
>> Any idea?
>>
>>
>> 10x
>> tamas
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>