Displaying 20 results from an estimated 417 matches for "glusterfsd".
2013 Jul 07
1
Getting ERROR: parsing the volfile failed (No such file or directory) when starting glusterd on Fedora 19
...temctl start glusterd.service
[root at chicago-fw1 system]# tail /var/log/messages
Jul 7 06:18:28 chicago-fw1 dbus-daemon[508]: dbus[508]: [system] Successfully activated service 'org.fedoraproject.Setroubleshootd'
Jul 7 06:18:29 chicago-fw1 setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from name_bind access on the tcp_socket . For complete SELinux messages. run sealert -l 6ef33b0e-94fc-4eba-8a11-f594985ba312
Jul 7 06:18:30 chicago-fw1 systemd[1]: Started GlusterFS an clustered file-system server.
Jul 7 06:18:30 chicago-fw1 systemd[1]: Starting GlusterFS an clustered file-system...
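A minimal triage sketch for this denial, assuming setroubleshoot is installed (the alert ID is the one from the log above):

sealert -l 6ef33b0e-94fc-4eba-8a11-f594985ba312   # show the full SELinux denial details
setenforce 0                                      # temporarily permissive, to confirm SELinux is the blocker
systemctl start glusterd.service
setenforce 1                                      # re-enable enforcing once confirmed, then apply sealert's suggested fix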
2012 Sep 18
1
glusterd vs. glusterfsd
I'm running version 3.3.0 on Fedora16-x86_64. The official(?) RPMs
ship two init scripts, glusterd and glusterfsd. I've googled a bit,
and I can't figure out what the purpose is for each of them. I know
that I need one of them, but I can't tell which for sure. There's no
man page for either, and running them with --help returns the same
exact output. Do they have separate purposes? Do I on...
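For what it's worth, glusterd is the management daemon and the glusterfsd processes are the per-brick servers it spawns; a rough way to see this on a host with a volume, as a sketch ('myvol' is a placeholder):

service glusterd start          # start only the management daemon
gluster volume start myvol      # glusterd then spawns one glusterfsd per local brick
ps -ef | grep glusterfsd        # each brick shows up as its own glusterfsd process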
2013 Jun 03
2
recovering gluster volume || startup failure
...on Centos 6.1 transport: TCP
sharing volume over NFS for VM storage - VHD Files
Type: distributed - only 1 node (brick)
XFS (LVM)
mount /dev/datastore1/mylv1 /export/brick1 - mounts VHD files.......is there a way to recover these files?
cat export-brick1.log
[2013-06-02 09:29:00.832914] I [glusterfsd.c:1666:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.3.1
[2013-06-02 09:29:00.845515] I [graph.c:241:gf_add_cmdline_options] 0-gvol1-server: adding option 'listen-port' for volume 'gvol1-server' with value '24009'
[2013-06-02 09:29:00.845558] I...
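A hedged first step is to check whether the brick and volume can simply be brought back, using the volume name from the log (the mount invocation is the poster's own):

mount /dev/datastore1/mylv1 /export/brick1   # make sure the brick filesystem is mounted first
gluster volume start gvol1 force             # ask glusterd to restart the volume's brick process
gluster volume status gvol1                  # confirm the brick is online before remounting clients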
2017 Jul 30
1
Lose gnfs connection during test
...0721 18:53:28.873 /var/log/messages: Jul 30 18:53:02 localhost_10
kernel: nfs: server 10.147.4.99 not responding, still trying
Here is the error message in nfs.log for gluster:
19:26:18.440498] I [rpc-drc.c:689:rpcsvc_drc_init] 0-rpc-service: DRC
is turned OFF
[2017-07-30 19:26:18.450180] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk]
0-glusterfs: No change in volfile, continuing
[2017-07-30 19:26:18.493551] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk]
0-glusterfs: No change in volfile, continuing
[2017-07-30 19:26:18.545959] I [glusterfsd-mgmt.c:1620:mgmt_getspec_cbk]
0-glusterfs: No change in vo...
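A sketch for checking whether the gluster NFS server is still up and exporting (the volume name is a placeholder; the address is from the log above):

gluster volume status myvol nfs   # is the gluster NFS server process up for this volume?
showmount -e 10.147.4.99          # is the export still visible to clients?
rpcinfo -p 10.147.4.99            # are the NFS/mountd services still registered with portmap?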
2011 Apr 07
0
TCP connection increase when reconfiguring, and question about multi-graph
Hi, all.
I set up a dht system, and sent a HUP signal to the client to trigger reconfiguration.
But I found that the number of established TCP connections increased by the number
of bricks (the number of glusterfsd processes).
$ ps -ef | grep glusterfs
root 8579 1 0 11:28 ? 00:00:00 glusterfsd -f /home/huz/dht/server.vol -l /home/huz/dht/server.log -L TRACE
root 8583 1 0 11:28 ? 00:00:00 glusterfsd -f /home/huz/dht/server2.vol -l /home/huz/dht/server2.log -L TRACE
root...
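One way to confirm the leak is to count the client's established connections before and after the HUP; a sketch (the pid placeholder is whatever the glusterfs client process is):

netstat -ntp 2>/dev/null | grep glusterfs | grep ESTABLISHED | wc -l   # baseline count
kill -HUP <client-pid>                                                 # trigger the reconfiguration
netstat -ntp 2>/dev/null | grep glusterfs | grep ESTABLISHED | wc -l   # should not have grown by one per brick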
2013 Feb 08
1
GlusterFS OOM Issue
Hello,
I am running GlusterFS version 3.2.7-2~bpo60+1 on Debian 6.0.6. Today, I
experienced a glusterfs process causing the server to invoke
oom_killer.
How exactly would I go about investigating this and coming up with a fix?
--
Steve King
Network/Linux Engineer - AdSafe Media
Cisco Certified Network Professional
CompTIA Linux+ Certified Professional
CompTIA A+ Certified Professional
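A reasonable starting point, assuming Debian's default log locations (a sketch, not a definitive procedure):

grep -i -B2 -A8 'oom' /var/log/kern.log   # confirm which process the kernel killed and the memory state at the time
top -b -n1 | grep glusterfs               # snapshot the current glusterfs memory footprint
kill -USR1 <glusterfs-pid>                # on many releases this makes glusterfs dump its state under /tmp for inspection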
2018 Apr 16
1
lstat & readlink calls during glusterfsd process startup
Hi all,
I am on gluster 3.10.5 with one EC volume 16+4.
One of the machines went down the previous night and I just fixed it and powered it on.
When the glusterfsd processes started they consumed all CPU on the server.
strace shows every process going over the bricks directory and doing
lstat & readlink calls.
Each brick directory is 8TB, 60% full. I waited 24 hours for it to
finish but it did not.
I stopped glusterd and restarted it but the same thing happen...
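To watch exactly what the poster describes, one can attach strace to a single brick process and filter to the two calls in question (pid selection here is illustrative):

strace -f -p $(pgrep -f glusterfsd | head -n1) -e trace=lstat,readlink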
2017 Nov 13
2
snapshot mount fails in 3.12
...hat
were not mentioned in the release notes for snapshot mounting?
I recently upgraded from 3.10 to 3.12 on CentOS (using
centos-release-gluster312). The upgrade worked flawlessly. The volume
works fine too. But mounting a snapshot fails with these two error messages:
[2017-11-13 08:46:02.300719] E [glusterfsd-mgmt.c:1796:mgmt_getspec_cbk]
0-glusterfs: failed to get the 'volume file' from server
[2017-11-13 08:46:02.300744] E [glusterfsd-mgmt.c:1932:mgmt_getspec_cbk]
0-mgmt: failed to fetch volume file (key:snaps)
Up to the mounting everything works as before:
# gluster snapshot create test home...
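For reference, the usual 3.12 sequence leading up to the failing step, with placeholders where the post truncates (the snaps volfile id matches the 'key:snaps' in the error above):

gluster snapshot create test home
gluster snapshot activate <snapname>                            # the snapshot must be activated before mounting
mount -t glusterfs localhost:/snaps/<snapname>/home /mnt/snap   # /mnt/snap is a hypothetical mount point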
2017 Jun 29
3
Some bricks are offline after restart, how to bring them online gracefully?
...me brick - it's a random issue.
I checked log on affected servers and this is an example:
sudo tail /var/log/glusterfs/bricks/st-brick3-0.log
[2017-06-29 17:59:48.651581] W [socket.c:593:__socket_rwv] 0-glusterfs:
readv on 10.2.44.23:24007 failed (No data available)
[2017-06-29 17:59:48.651622] E [glusterfsd-mgmt.c:2114:mgmt_rpc_notify]
0-glusterfsd-mgmt: failed to connect with remote-host: glunode0 (No data
available)
[2017-06-29 17:59:48.651638] I [glusterfsd-mgmt.c:2133:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers
[2017-06-29 17:59:49.944103] W [glusterfsd.c:1332:cleanup_and_exi...
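The usual graceful recovery, as a sketch ('myvol' is a placeholder):

gluster volume status myvol        # list which bricks came up offline
gluster volume start myvol force   # restart just the missing brick processes without disturbing clients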
2017 Jul 20
1
Error while mounting gluster volume
...ing the below logs
[1970-01-02 10:54:04.420065] E [MSGID: 101187]
[event-epoll.c:391:event_register_epoll] 0-epoll: failed to add fd(=7) to
epoll fd(=0) [Invalid argument]
[1970-01-02 10:54:04.420140] W [socket.c:3095:socket_connect] 0-: failed to
register the event
[1970-01-02 10:54:04.420406] E [glusterfsd-mgmt.c:1818:mgmt_rpc_notify]
0-glusterfsd-mgmt: failed to connect with remote-host: 128.224.95.140
(Success)
[1970-01-02 10:54:04.420422] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[1970-01-02 10:54:04.420429] I [glusterfsd-mgmt.c:1824:mgm...
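Two hedged checks suggested by this log: the 1970-01-02 timestamps hint the system clock was never set, and the connect failure can be tested independently of gluster (the address is from the log):

date                          # a 1970 date here would explain the log timestamps
nc -zv 128.224.95.140 24007   # verify the volfile server is reachable on glusterd's management port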
2017 Aug 20
2
Glusterd not working with systemd in redhat 7
...var/log/gluster.log 0 0
web1.dasilva.network:/etc /mnt/glusterfs/etc glusterfs
defaults,_netdev,log-level=debug,log-file=/var/log/gluster.log 0 0
Here is the logfile /var/log/gluster.log
--8<--start of log file ----------------------------------
[2017-08-20 20:30:39.638989] I [MSGID: 100030] [glusterfsd.c:2476:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.11.2
(args: /usr/sbin/glusterfs --log-level=DEBUG
--log-file=/var/log/gluster.log --volfile-server=web1.dasilva.network
--volfile-id=/www /mnt/glusterfs/www)
[2017-08-20 20:30:39.639024] I [MSGID: 100030] [glusterfsd....
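A common arrangement on RHEL 7 is to make sure glusterd is enabled and that _netdev mounts wait for the network; a sketch using standard systemd units:

systemctl enable glusterd                                # start the management daemon at boot
systemctl is-enabled NetworkManager-wait-online.service  # _netdev ordering relies on network-online.target
systemctl status remote-fs.target                        # the target that pulls in _netdev fstab mounts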
2017 Nov 13
0
snapshot mount fails in 3.12
...release notes for snapshot mounting?
> I recently upgraded from 3.10 to 3.12 on CentOS (using
> centos-release-gluster312). The upgrade worked flawlessly. The volume
> works fine too. But mounting a snapshot fails with those two error
> messages:
>
> [2017-11-13 08:46:02.300719] E [glusterfsd-mgmt.c:1796:mgmt_getspec_cbk]
> 0-glusterfs: failed to get the 'volume file' from server
> [2017-11-13 08:46:02.300744] E [glusterfsd-mgmt.c:1932:mgmt_getspec_cbk]
> 0-mgmt: failed to fetch volume file (key:snaps)
>
> Up to the mounting everything works as before:
> # glus...
2017 Sep 08
2
GlusterFS as virtual machine storage
...13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> Gandalf, isn't possible server hard-crash too much? I mean if reboot
> reliably kills the VM, there is no doubt network crash or poweroff
> will as well.
IIUC, the only way to keep I/O running is to gracefully exit glusterfsd.
killall should send signal 15 (SIGTERM) to the process, so maybe there is a bug in
signal management
on the gluster side? The kernel is already telling glusterfsd to exit
through signal 15, but glusterfsd
seems to handle this in a bad way.
A server hard-crash doesn't send any signal. I think this could be...
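The distinction being drawn, in command form: SIGTERM gives glusterfsd a chance to shut down its client connections, while SIGKILL, like a hard crash, does not:

killall -15 glusterfsd   # graceful: the brick can notify clients before exiting
killall -9 glusterfsd    # ungraceful: from the clients' point of view, the same as a crash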
2017 Jun 30
0
Some bricks are offline after restart, how to bring them online gracefully?
...t; I checked log on affected servers and this is an example:
>
> sudo tail /var/log/glusterfs/bricks/st-brick3-0.log
>
> [2017-06-29 17:59:48.651581] W [socket.c:593:__socket_rwv] 0-glusterfs:
> readv on 10.2.44.23:24007 failed (No data available)
> [2017-06-29 17:59:48.651622] E [glusterfsd-mgmt.c:2114:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: failed to connect with remote-host: glunode0 (No data
> available)
> [2017-06-29 17:59:48.651638] I [glusterfsd-mgmt.c:2133:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: Exhausted all volfile servers
> [2017-06-29 17:59:49.944103] W [glusterf...
2017 Aug 21
0
Glusterd not working with systemd in redhat 7
...defaults,_netdev,log-level=debug,log-file=/var/log/gluster.log 0 0
>
> Here is the logfile /var/log/gluster.log
>
Could you point us to the glusterd log file content?
> --8<--start of log file ----------------------------------
> [2017-08-20 20:30:39.638989] I [MSGID: 100030] [glusterfsd.c:2476:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.11.2
> (args: /usr/sbin/glusterfs --log-level=DEBUG --log-file=/var/log/gluster.log
> --volfile-server=web1.dasilva.network --volfile-id=/www
> /mnt/glusterfs/www)
> [2017-08-20 20:30:39.639024] I [MS...
2013 May 24
0
Problem After adding Bricks
....0%hi, 0.0%si, 0.0%st
Mem: 16405712k total, 16310088k used, 95624k free, 12540824k buffers
Swap: 1999868k total, 9928k used, 1989940k free, 656604k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2460 root 20 0 391m 38m 1616 S 250 0.2 4160:51 glusterfsd
2436 root 20 0 392m 40m 1624 S 243 0.3 4280:26 glusterfsd
2442 root 20 0 391m 39m 1620 S 187 0.2 3933:46 glusterfsd
2454 root 20 0 391m 36m 1620 S 118 0.2 3870:23 glusterfsd
2448 root 20 0 391m 38m 1624 S 110 0.2 3720:50 glusterfsd
2472 root...
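After an add-brick, sustained glusterfsd CPU like this often accompanies a rebalance or self-heal; two hedged checks ('myvol' is a placeholder, and heal info applies only to replicated volumes):

gluster volume rebalance myvol status
gluster volume heal myvol info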
2023 Mar 14
1
can't set up geo-replication: can't fetch slave details
...ansible
geoaccount at glusterX::ansible create push-pem
Unable to mount and fetch slave volume details. Please check the log:
/var/log/glusterfs/geo-replication/gverify-slavemnt.log
geo-replication command failed
That log file contained this:
[2023-03-14 19:13:48.904461 +0000] I [MSGID: 100030]
[glusterfsd.c:2685:main] 0-glusterfs: Started running version
[{arg=glusterfs}, {version=9.2}, {cmdlinestr=glusterfs --xlator-
option=*dht.lookup-unhashed=off --volfile-server glusterX --volfile-id
ansible -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log
/tmp/gverify.sh.txIgka}]
[2023-03-14 19:13:48...
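The failing step can be reproduced by hand with essentially the mount command gverify logs above (the mount point and log path here are hypothetical):

mkdir -p /tmp/slavemnt
glusterfs --xlator-option=*dht.lookup-unhashed=off --volfile-server glusterX --volfile-id ansible -l /tmp/slavemnt.log /tmp/slavemnt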
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04).
I've created a replicated volume with the 4 machines.
Then on the client machine I've executed:
mount -t glusterfs gluster01:/volume01 /mnt/gluster
And everything works ok.
The main problem occurs on every client machine where I do:
umount /mnt/gluster
and then
mount -t glusterfs gluster01:/volume01 /mnt/gluster
The client
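When the remount misbehaves, the client log named after the mount point is the first place to look; a sketch of the cycle:

umount /mnt/gluster
mount -t glusterfs gluster01:/volume01 /mnt/gluster
tail /var/log/glusterfs/mnt-gluster.log   # gluster names the client log after the mount path, slashes becoming dashes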
2008 Dec 20
1
glusterfs1.4rc6 bdb problem
...otocol.c:7509:validate_auth_options]
bdbserver: volume 'bdb' defined as subvolume, but no authentication defined
for the same
2008-12-20 20:57:52 E [xlator.c:563:xlator_init_rec] xlator: initialization of
volume 'bdbserver' failed, review your volfile again
2008-12-20 20:57:52 E [glusterfsd.c:459:_xlator_graph_init] bdbserver:
initializing translator failed
2008-12-20 20:57:52 E [glusterfsd.c:1061:main] glusterfs: translator
initialization failed. exiting
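The error says the server volume exports 'bdb' without any auth option; in volfiles of this era that is declared with an auth.addr line in the protocol/server block, roughly like this (a sketch; the allow pattern is a placeholder to be tightened):

volume bdbserver
  type protocol/server
  option transport-type tcp
  subvolumes bdb
  option auth.addr.bdb.allow *   # without some auth option the translator refuses to initialize
end-volume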
2017 Jun 30
2
Some bricks are offline after restart, how to bring them online gracefully?
...ervers and this is an example:
> >
> > sudo tail /var/log/glusterfs/bricks/st-brick3-0.log
> >
> > [2017-06-29 17:59:48.651581] W [socket.c:593:__socket_rwv] 0-glusterfs:
> > readv on 10.2.44.23:24007 failed (No data available)
> > [2017-06-29 17:59:48.651622] E [glusterfsd-mgmt.c:2114:mgmt_rpc_notify]
> > 0-glusterfsd-mgmt: failed to connect with remote-host: glunode0 (No data
> > available)
> > [2017-06-29 17:59:48.651638] I [glusterfsd-mgmt.c:2133:mgmt_rpc_notify]
> > 0-glusterfsd-mgmt: Exhausted all volfile servers
> > [2017-06-29 17:...