search for: cn02

Displaying 9 results from an estimated 9 matches for "cn02".

2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
...peration? # gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick cn01-ib:/gfs/gv0/brick1/brick         0         49152      Y       70850
Brick cn02-ib:/gfs/gv0/brick1/brick         0         49152      Y       102951
Brick cn03-ib:/gfs/gv0/brick1/brick         0         49152      Y       57535
Brick cn04-ib:/gfs/gv0/brick1/brick         0         49152      Y       56676
Brick cn05-ib:/gfs/gv0/brick1/brick         0         49152      Y...
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
...>> Gluster process TCP Port RDMA Port Online >> Pid >> ------------------------------------------------------------ >> ------------------ >> Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y >> 70850 >> Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y >> 102951 >> Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y >> 57535 >> Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152 Y >> 56676 >> Brick cn05-ib:/gfs/gv0/brick1/bri...
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...> Status of volume: gv0 > Gluster process TCP Port RDMA Port Online > Pid > ------------------------------------------------------------------------------ > Brick cn01-ib:/gfs/gv0/brick1/brick 0 49152 Y > 70850 > Brick cn02-ib:/gfs/gv0/brick1/brick 0 49152 Y > 102951 > Brick cn03-ib:/gfs/gv0/brick1/brick 0 49152 Y > 57535 > Brick cn04-ib:/gfs/gv0/brick1/brick 0 49152 Y > 56676 > Brick cn05-ib:/gfs/gv0/brick1/brick...
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
...s                                         TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick cn01-ib:/gfs/gv0/brick1/brick      0         49152      Y       70850
>>> Brick cn02-ib:/gfs/gv0/brick1/brick      0         49152      Y       102951
>>> Brick cn03-ib:/gfs/gv0/brick1/brick      0         49152      Y       57535
>>> Brick cn04-ib:/gfs/gv0/brick1/brick      0         49152      Y       56676
>>> Brick cn0...
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...s                                         TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick cn01-ib:/gfs/gv0/brick1/brick      0         49152      Y       70850
>>> Brick cn02-ib:/gfs/gv0/brick1/brick      0         49152      Y       102951
>>> Brick cn03-ib:/gfs/gv0/brick1/brick      0         49152      Y       57535
>>> Brick cn04-ib:/gfs/gv0/brick1/brick      0         49152      Y       56676
>>> Brick cn0...
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...lume status
> Status of volume: gv0
> Gluster process                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick cn01-ib:/gfs/gv0/brick1/brick        0         49152      Y       70850
> Brick cn02-ib:/gfs/gv0/brick1/brick        0         49152      Y       102951
> Brick cn03-ib:/gfs/gv0/brick1/brick        0         49152      Y       57535
> Brick cn04-ib:/gfs/gv0/brick1/brick        0         49152      Y       56676
> Brick cn05-ib:/gfs/gv0/brick1/brick        0         491...
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
...TCP Port  RDMA Port  Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick cn01-ib:/gfs/gv0/brick1/brick     0         49152      Y       70850
>>>> Brick cn02-ib:/gfs/gv0/brick1/brick     0         49152      Y       102951
>>>> Brick cn03-ib:/gfs/gv0/brick1/brick     0         49152      Y       57535
>>>> Brick cn04-ib:/gfs/gv0/brick1/brick     0         49152      Y       56676
>>>> ...
2018 May 04
0
Crashing applications, RDMA_ERROR in logs
...is? # gluster volume status gv0
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick cn01-ib:/gfs/gv0/brick1/brick         0         49152      Y       3984
Brick cn02-ib:/gfs/gv0/brick1/brick         0         49152      Y       3352
Brick cn03-ib:/gfs/gv0/brick1/brick         0         49152      Y       3333
Brick cn04-ib:/gfs/gv0/brick1/brick         0         49152      Y       3079
Brick cn05-ib:/gfs/gv0/brick1/brick         0         49152      Y...
2012 Oct 11
0
Quota system over NFS
I've been trying to implement the quota system for a Linux cluster. I'm trying it out on a mini-cluster of 4 CentOS 6.3 VMs. I've named them: lion-login, lion-sn1, lion-cn01, lion-cn02. All nodes are on the virtual subnet 192.168.56.0/24. lion-sn1 (storage node 1) is the NFS server and the other three nodes are the NFS clients.
/etc/exports on lion-sn1:
/home 192.168.56.0/24(rw)
/etc/fstab on the other nodes:
lion-sn1:/home /home nfs defaults,usrquota 0 2
User quota...
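
The quota result above quotes the poster's configuration inline; laid out file by file it reads as below. The server-side fstab entry at the end is not in the original message and is shown only as an illustration (device name and filesystem type are assumptions): on a setup like this, disk quotas are normally enabled on the filesystem the server exports, and the usrquota option on the clients' nfs line has no effect on its own, since clients obtain quota information from the server's rpc.rquotad.

    /etc/exports on lion-sn1 (NFS server), as quoted above:
        /home    192.168.56.0/24(rw)

    /etc/fstab on the client nodes, as quoted above:
        lion-sn1:/home    /home    nfs    defaults,usrquota    0 2

    /etc/fstab on lion-sn1 itself (hypothetical device and filesystem, to show where usrquota normally belongs):
        /dev/sdb1    /home    ext4    defaults,usrquota    0 2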
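
Most of the other results come from a single 2018 gluster-users thread about healing a replicated volume whose bricks use RDMA transport, which is why every status excerpt shows TCP Port 0 and RDMA Port 49152. As a rough sketch of the commands involved, assuming the volume gv0 from the excerpts (these are standard gluster CLI invocations, not the thread's actual resolution):

    # gluster volume status gv0       (every brick should show Online = Y, as in the excerpts)
    # gluster volume info gv0         (shows the configured transport: tcp, rdma, or tcp,rdma)
    # gluster volume heal gv0         (triggers a heal; the error quoted in the subject line is
                                       roughly what this step prints when a brick process cannot
                                       be reached)
    # gluster volume heal gv0 info    (lists the entries each brick still needs to heal)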