search for: volfile

Displaying 20 results from an estimated 266 matches for "volfile".

2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
...ll running 3.7.20, as confirmed by >> the occurrence of the following log message: >> >> [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main] >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol >> /rhev/data-center/mnt/glusterSD/s1:_testvol) >> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glust...
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
...unt) seems to be still running 3.7.20, as confirmed by > the occurrence of the following log message: > > [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main] > 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 > (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 > --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol > /rhev/data-center/mnt/glusterSD/s1:_testvol) > [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] > 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7...
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
...s confirmed by >>> the occurrence of the following log message: >>> >>> [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main] >>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >>> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol >>> /rhev/data-center/mnt/glusterSD/s1:_testvol) >>> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] >>> 0-/usr/sbin/glusterfs: Started running...
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
...>> by the occurrence of the following log message: >>>> >>>> [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main] >>>> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >>>> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >>>> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol >>>> /rhev/data-center/mnt/glusterSD/s1:_testvol) >>>> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] >>>> 0-/usr/sbin/glusterfs:...
2013 Jul 07
1
Getting ERROR: parsing the volfile failed (No such file or directory) when starting glusterd on Fedora 19
...systemd[1]: Started GlusterFS an clustered file-system server. Jul 7 06:18:30 chicago-fw1 systemd[1]: Starting GlusterFS an clustered file-system server... Jul 7 06:18:30 chicago-fw1 glusterfsd[1728]: [2013-07-07 11:18:30.717075] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory) Jul 7 06:18:30 chicago-fw1 glusterfsd[1728]: USAGE: /usr/sbin/glusterfsd [options] [mountpoint] Jul 7 06:18:30 chicago-fw1 GlusterFS[1728]: [2013-07-07 11:18:30.717075] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such f...
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
...ages. So your client (mount) seems to be still running 3.7.20, as confirmed by the occurrence of the following log message: [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol /rhev/data-center/mnt/glusterSD/s1:_testvol) [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 (args: /usr/sbin...
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
...ll running 3.7.20, as confirmed by >> the occurrence of the following log message: >> >> [2017-05-26 08:58:23.647458] I [MSGID: 100030] [glusterfsd.c:2338:main] >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.20 >> (args: /usr/sbin/glusterfs --volfile-server=s1 --volfile-server=s2 >> --volfile-server=s3 --volfile-server=s4 --volfile-id=/testvol >> /rhev/data-center/mnt/glusterSD/s1:_testvol) >> [2017-05-26 08:58:40.901204] I [MSGID: 100030] [glusterfsd.c:2338:main] >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glust...
2012 Nov 30
3
Cannot mount gluster volume
...eas-data-0-0 Brick2: gluster-0-1:/mseas-data-0-1 Brick3: gluster-data:/data [root at mseas-data ~]# ps -ef | grep gluster root 2783 1 0 Nov29 ? 00:01:15 /usr/sbin/glusterd -p /var/run/glusterd.pid root 2899 1 0 Nov29 ? 00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id gdata.gluster-data.data -p /var/lib/glusterd/vols/gdata/run/gluster-data-data.pid -S /tmp/e3eac7ce95e786a3d909b8fc65ed2059.socket --brick-name /data -l /var/log/glusterfs/bricks/data.log --xlator-option *-posix.glusterd-uuid=22f1102a-08e6-482d-ad23-d8e063cf32ed --brick-port 24009 --xlator-option...
2017 Jul 30
1
Lose gnfs connection during test
...ernel: nfs: server 10.147.4.99 not responding, still trying Here is the error message in nfs.log for gluster: 19:26:18.440498] I [rpc-drc.c:689:rpcsvc_drc_init] 0-rpc-service: DRC is turned OFF [2017-07-30 19:26:18.450180] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2017-07-30 19:26:18.493551] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2017-07-30 19:26:18.545959] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing [2017-07-30 19:42:29.704707] I [glusterfsd-mgmt...
2017 Dec 13
1
'ERROR: parsing the volfile failed' on fresh install
...with error code. See "systemctl status glusterd.service" and "journalctl -xe" for details. $ sudo journalctl -xe | tail -- Unit glusterd.service has begun starting up. Dec 12 20:17:46 u101410 GlusterFS[29360]: [glusterfsd.c:2004:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed [No such file or directory] Dec 12 20:17:46 u101410 glusterd[29360]: USAGE: /usr/sbin/glusterd [options] [mountpoint] Dec 12 20:17:46 u101410 systemd[1]: glusterd.service: Control process exited, code=exited status=255 Dec 12 20:17:46 u101410 systemd[1]: Failed to start GlusterFS, a clustere...
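The "ERROR: parsing the volfile failed [No such file or directory]" above generally means glusterd could not open its own volfile at startup. A minimal first check, assuming a stock package layout where glusterd reads /etc/glusterfs/glusterd.vol by default:

    ls -l /etc/glusterfs/glusterd.vol    # confirm the default volfile actually exists
    sudo glusterd --debug                # run in the foreground with verbose logging to see the failure directly

If the file is missing, reinstalling or restoring it from the glusterfs-server package is the usual fix; the exact package name depends on the distribution.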
2015 May 16
4
fault tolerance
Hi people, I am now using gluster version 3.6.2 and I want to configure the system for fault tolerance. The point is that I want to have two servers in replication mode, and if one server goes down the client should not notice the fault. How do I need to mount the volume on the client for this purpose?
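For a two-server replica volume like this, the usual approach is to mount through one server and name the other as a backup volfile server, so the mount still works if the first server is unreachable. A minimal sketch, where server1, server2 and myvol are placeholders and the option name assumes a mount.glusterfs that supports backup-volfile-servers:

    mount -t glusterfs -o backup-volfile-servers=server2 server1:/myvol /mnt/myvol

The volfile server only matters while fetching the volume configuration at mount time; once mounted, a replica client talks to all bricks directly, so losing one replica node should not interrupt I/O.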
2017 Dec 15
0
'ERROR: parsing the volfile failed' on fresh install
...nd "journalctl -xe" >> for details. >> >> >> >> $ sudo journalctl -xe | tail >> -- Unit glusterd.service has begun starting up. >> Dec 12 20:17:46 u101410 GlusterFS[29360]: [glusterfsd.c:2004:parse_cmdline] >> 0-glusterfs: ERROR: parsing the volfile failed [No such file or directory] >> Dec 12 20:17:46 u101410 glusterd[29360]: USAGE: /usr/sbin/glusterd >> [options] [mountpoint] >> Dec 12 20:17:46 u101410 systemd[1]: glusterd.service: Control process >> exited, code=exited status=255 >> Dec 12 20:17:46 u101410 syst...
2018 Jan 25
2
Run away memory with gluster mount
...ptions in there, (increased cache-size, md invalidation, etc) but stripped them out in an attempt to isolate the issue. Still got the problem without them. The volume currently contains over 1M files. When mounting the volume, I get (among other things) a process as such: /usr/sbin/glusterfs --volfile-server=localhost --volfile-id=/GlusterWWW /var/www This process begins with little memory, but then as files are accessed in the volume the memory increases. I setup a script that simply reads the files in the volume one at a time (no writes). It's been running on and off about 12 hours now...
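A read-only traversal like the one described can be reproduced with a short shell loop; the mount point and volfile-id below are taken from the excerpt, while the pgrep pattern (and the assumption of a single matching client process) are illustrative only:

    # read every file in the mounted volume once, discarding the data
    find /var/www -type f -exec cat {} + > /dev/null
    # then watch the resident size of the fuse client process
    grep VmRSS /proc/$(pgrep -f 'volfile-id=/GlusterWWW')/status

Running the traversal repeatedly while sampling VmRSS makes it easier to distinguish steady-state caching from unbounded growth.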
2012 Oct 05
0
No subject
...data-0-0 Brick2: gluster-0-1:/mseas-data-0-1 Brick3: gluster-data:/data [root at mseas-data ~]# ps -ef | grep gluster root 2783 1 0 Nov29 ? 00:01:15 /usr/sbin/glusterd -p /var/run/glusterd.pid root 2899 1 0 Nov29 ? 00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id gdata.gluster-data.data -p /var/lib/glusterd/vols/gdata/run/gluster-data-data.pid -S /tmp/e3eac7ce95e786a3d909b8fc65ed2059.socket --brick-name /data -l /var/log/glusterfs/bricks/data.log --xlator-option *-posix.glusterd-uuid=22f1102a-08e6-482d-ad23-d8e063cf32ed --brick-port 24009 --xla...
2023 Feb 23
1
Big problems after update to 9.6
...or=auto gluster root at br:~# ps -ef | grep gluster root 2052 1 0 2022 ? 00:00:00 /usr/bin/python3 /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid root 2062 1 3 2022 ? 10-11:57:16 /usr/sbin/glusterfs --fuse-mountopts=noatime --process-name fuse --volfile-server=br --volfile-server=sg --volfile-id=/gvol0 --fuse-mountopts=noatime /mnt/glusterfs root 2379 2052 0 2022 ? 00:00:00 /usr/bin/python3 /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid root 5884 1 5 2022 ? 18-16:08:53 /usr/sbin/glusterfsd -s br --...
2018 Jan 25
0
Run away memory with gluster mount
...options in there, (increased cache-size, md invalidation, etc) but stripped them out in an attempt to isolate the issue. Still got the problem without them. The volume currently contains over 1M files. When mounting the volume, I get (among other things) a process as such: /usr/sbin/glusterfs --volfile-server=localhost --volfile-id=/GlusterWWW /var/www This process begins with little memory, but then as files are accessed in the volume the memory increases. I setup a script that simply reads the files in the volume one at a time (no writes). It's been running on and off about 12 hours now a...
2009 Aug 11
3
Fuse problem
Hello all, I'm running a 64bit Centos5 setup and am trying to mount a gluster filesystem (which is exported out of the same box). glusterfs --debug --volfile=/root/gluster/webspace2.vol /home/webspace_glust/ Gives me: <snip> [2009-08-11 16:26:37] D [client-protocol.c:5963:init] glust1b_36: defaulting ping-timeout to 10 [2009-08-11 16:26:37] D [transport.c:141:transport_load] transport: attempt to load file /usr/lib64/glusterfs/2.0.4/transport/so...
2017 Oct 17
2
Gluster processes remaining after stopping glusterd
...ster processes in my host? That's we see after run the command: *************************************************************************************************** [root at xxxxxx ~]# ps -ef | grep -i glu root 1825 1 0 Oct05 ? 00:05:07 /usr/sbin/glusterfsd -s dvihcasc0s --volfile-id advdemo.dvihcasc0s.opt-glusterfs-advdemo -p /var/lib/glusterd/vols/advdemo/run/dvihcasc0s-opt-glusterfs-advdemo.pid -S /var/run/gluster/b7cbd8cac308062ef1ad823a3abf54f5.socket --brick-name /opt/glusterfs/advdemo -l /var/log/glusterfs/bricks/opt-glusterfs-advdemo.log --xlator-option *-posix.glust...
2018 Jan 26
2
Run away memory with gluster mount
...e, > md invalidation, etc) but stripped them out in an attempt to isolate > the issue. Still got the problem without them. > > The volume currently contains over 1M files. > > When mounting the volume, I get (among other things) a process as such: > > /usr/sbin/glusterfs --volfile-server=localhost > --volfile-id=/GlusterWWW /var/www > > This process begins with little memory, but then as files are accessed > in the volume the memory increases. I setup a script that simply reads > the files in the volume one at a time (no writes). It's been running >...
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
...there any specific sequence or trick I need to follow? Currently, I am using the following command: [root at master2 ~]# systemctl stop glusterd.service [root at master2 ~]# ps aux | grep gluster root 2710138 14.1 0.0 2968372 216852 ? Ssl Jan27 170:27 /usr/sbin/glusterfsd -s master2 --volfile-id tier1data.master2.opt-tier1data2019-brick -p /var/run/gluster/vols/tier1data/master2-opt-tier1data2019-brick.pid -S /var/run/gluster/97da28e3d5c23317.socket --brick-name /opt/tier1data2019/brick -l /var/log/glusterfs/bricks/opt-tier1data2019-brick.log --xlator-option *-posix.glusterd-uuid=c1591b...
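Stopping glusterd intentionally leaves the brick (glusterfsd) and other helper daemons running so that existing mounts are not interrupted, which is why they still show up in ps afterwards. One hedged shutdown sequence, assuming the helper script ships with the glusterfs-server package at the path below:

    systemctl stop glusterd
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
    ps aux | grep -i gluster    # verify nothing is left running

If that script is not installed, stopping the volumes first (gluster volume stop <volname>) before stopping glusterd gives a similarly orderly shutdown, at the cost of taking each volume offline cluster-wide.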