search for: nfs02

Displaying 13 results from an estimated 13 matches for "nfs02".

2017 Aug 25 (2 replies)
Rolling upgrade from 3.6.3 to 3.10.5
...us' ran on non-upgraded peers

Status of volume: gsnfs
Gluster process                              Port    Online  Pid
------------------------------------------------------------------------------
Brick gs-nfs01:/ftpdata                      49154   Y       2931
Brick gs-nfs02:/ftpdata                      49152   Y       29875
Brick gs-nfs03:/ftpdata                      49153   Y       6987
Brick gs-nfs04:/ftpdata                      49153   Y       24768
Self-heal Daemon on localhost                N/A     Y       2938
Self-heal Daemon...
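The excerpt above is the output of gluster's volume status command, run against the `gsnfs` volume from the thread. As a sketch of how one would check this mid-upgrade (assumes a working gluster CLI on a peer):

```shell
# Show brick ports, online state, and PIDs for the volume (run on any peer).
gluster volume status gsnfs

# During a rolling upgrade, also confirm no heals are pending
# before taking the next brick down.
gluster volume heal gsnfs info
```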
2017 Aug 25 (0 replies)
Rolling upgrade from 3.6.3 to 3.10.5
...eers
>
> Status of volume: gsnfs
> Gluster process                              Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick gs-nfs01:/ftpdata                      49154   Y       2931
> Brick gs-nfs02:/ftpdata                      49152   Y       29875
> Brick gs-nfs03:/ftpdata                      49153   Y       6987
> Brick gs-nfs04:/ftpdata                      49153   Y       24768
> Self-heal Daemon on localhost                N/A     Y...
2018 May 22 (1 reply)
[SOLVED] [Nfs-ganesha-support] volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All, it appears I solved this one, and NFS mounts now work on all my clients. No issues since fixing it a few hours back.

RESOLUTION

Auditd is to blame for the trouble. Noticed this in the logs on 2 of the 3 NFS servers (nfs01, nfs02, nfs03):

type=AVC msg=audit(1526965320.850:4094): avc: denied { write } for pid=8714
comm="ganesha.nfsd" name="nfs_0" dev="dm-0" ino=201547689
scontext=system_u:system_r:ganesha_t:s0
tcontext=system_u:object_r:krb5_host_rcache_t:s0 tclass=file
type=SYSCALL msg=au...
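For an AVC denial like the one above, a common diagnostic path (a sketch; it assumes root access, a live auditd, and the policycoreutils tools installed, and the module name `ganesha-krb5` is made up for illustration) is to feed the denial records to audit2allow and load the generated local policy module:

```shell
# List recent AVC denials logged for the Ganesha daemon.
ausearch -m avc -c ganesha.nfsd

# Generate a local SELinux policy module covering those denials and load it.
# "ganesha-krb5" is an arbitrary module name chosen for this sketch.
ausearch -m avc -c ganesha.nfsd | audit2allow -M ganesha-krb5
semodule -i ganesha-krb5.pp
```

Note that blanket-allowing a denial is a workaround, not a root-cause fix; reviewing the generated `.te` file before loading it is the safer habit.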
2017 Aug 25 (2 replies)
Rolling upgrade from 3.6.3 to 3.10.5
...Port  Online  Pid
> >> > ------------------------------------------------------------------------------
> >> > Brick gs-nfs01:/ftpdata                      49154   Y       2931
> >> > Brick gs-nfs02:/ftpdata                      49152   Y       29875
> >> > Brick gs-nfs03:/ftpdata                      49153   Y       6987
> >> > Brick gs-nfs04:/ftpdata                      49153   Y       24768
>...
2018 May 08 (1 reply)
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...Hey Guys, returning to this topic after disabling the quorum:

cluster.quorum-type: none
cluster.server-quorum-type: none

I've run into a number of gluster errors (see below). I'm using gluster as the backend for my NFS storage. I have gluster running on two nodes, nfs01 and nfs02. It's mounted on /n on each host. The path /n is in turn shared out by NFS Ganesha. It's a two-node setup with quorum disabled as noted below:

[root@nfs02 ganesha]# mount | grep gv01
nfs02:/gv01 on /n type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_o...
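As a rough sketch of why a two-node replica trips this check: Gluster's server-side quorum is only met while more than half of the peers are up (the cutoff is tunable via cluster.server-quorum-ratio), so losing one node out of two leaves exactly 50% and fails. The numbers below are illustrative, not taken from the thread:

```shell
# Illustrative only: 1 of 2 peers up is not *more than* 50%, so quorum fails.
active_peers=1
total_peers=2
if [ $(( active_peers * 100 )) -gt $(( total_peers * 50 )) ]; then
  echo "quorum met"
else
  echo "quorum not met"
fi
```

This is why the poster disabled quorum entirely on two nodes; the usual alternative that keeps quorum enabled is a third node or an arbiter brick, so a single failure still leaves a majority.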
2017 Aug 25 (0 replies)
Rolling upgrade from 3.6.3 to 3.10.5
...id
>> >> > ------------------------------------------------------------------------------
>> >> > Brick gs-nfs01:/ftpdata                      49154   Y       2931
>> >> > Brick gs-nfs02:/ftpdata                      49152   Y       29875
>> >> > Brick gs-nfs03:/ftpdata                      49153   Y       6987
>> >> > Brick gs-nfs04:/ftpdata                      49153   Y       24768
>> >...
2018 Apr 11 (0 replies)
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...Type: Replicate
>> Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: nfs01:/bricks/0/gv01
>> Brick2: nfs02:/bricks/0/gv01
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> nfs.trusted-sync: on
>> performance.cache-size: 1GB
>> performance.io-thread-count: 16
>>...
2018 Apr 11 (3 replies)
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...
> Volume Name: gv01
> Type: Replicate
> Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: nfs01:/bricks/0/gv01
> Brick2: nfs02:/bricks/0/gv01
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> nfs.trusted-sync: on
> performance.cache-size: 1GB
> performance.io-thread-count: 16
> performance.write-behind-wi...
2018 Apr 09 (2 replies)
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...e are no active volume tasks
[root@nfs01 /]#
[root@nfs01 /]# gluster volume info

Volume Name: gv01
Type: Replicate
Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: nfs01:/bricks/0/gv01
Brick2: nfs02:/bricks/0/gv01
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
nfs.trusted-sync: on
performance.cache-size: 1GB
performance.io-thread-count: 16
performance.write-behind-window-size: 8MB
performance.readdir-ahead: on
client.event-threads: 8
ser...
2018 Apr 09 (0 replies)
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...ot@nfs01 /]# gluster volume info
>
> Volume Name: gv01
> Type: Replicate
> Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: nfs01:/bricks/0/gv01
> Brick2: nfs02:/bricks/0/gv01
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> nfs.trusted-sync: on
> performance.cache-size: 1GB
> performance.io-thread-count: 16
> performance.write-behind-window-size: 8MB
> performance....
1999 Jan 15 (4 replies)
Newbie questions
Hi Folks, I am a relatively inexperienced Solaris system admin. We have Samba version 1.9.15p8 running on our server currently. I assume that it would be worthwhile for me to upgrade to 2.0. However, I only generally know about how to keep Samba running, and very little about installation since someone else installed it. Unfortunately, I can't seem
2018 Mar 19 (0 replies)
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...Tried a few tweak options with no effect:

[root@nfs01 glusterfs]# gluster volume info

Volume Name: gv01
Type: Replicate
Volume ID: e5ccc75e-5192-45ac-b410-a34ebd777666
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: nfs01:/bricks/0/gv01
Brick2: nfs02:/bricks/0/gv01
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
server.event-threads: 8
client.event-threads: 8
performance.readdir-ahead: on
performance.write-behind-window-size: 8MB
performance.io-thread-count: 16
performance.cache-size: 1GB
nfs.trusted-sync: on...
2018 Mar 19 (2 replies)
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,

As I posted in my previous emails, glusterfs can never match the small-file and latency performance of NFS (especially an async one). That's inherent in the design. Nothing you can do about it.

Ondrej

-----Original Message-----
From: gluster-users-bounces@gluster.org [mailto:gluster-users-bounces@gluster.org] On Behalf Of Rik Theys
Sent: Monday, March 19, 2018 10:38 AM
To: gluster-users@