I have a two node cluster setup with iSCSI, using image files that are stored on the Gluster cluster as LUNs. They do appear to be syncing, but I have a few questions and I appreciate any help you can give me. Thanks for your time!

1) Why does the second brick show as N for online?
2) Why is the self-heal daemon shown as N/A? How can I correct that if it needs to be corrected?
3) Should I really be mounting the Gluster volumes on each Gluster node for iSCSI access, or should I be accessing /var/gluster-storage directly?
4) If I only have about 72GB of files stored in Gluster, why is each Gluster host using about 155GB? Are there duplicates stored somewhere, and why?

root@gluster1:~# gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/var/gluster-storage         49152     0          Y       3043
Brick gluster2:/var/gluster-storage         N/A       N/A        N       N/A
NFS Server on localhost                     2049      0          Y       3026
Self-heal Daemon on localhost               N/A       N/A        Y       3034
NFS Server on gluster2                      2049      0          Y       2738
Self-heal Daemon on gluster2                N/A       N/A        Y       2743

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

root@gluster1:~# gluster peer status
Number of Peers: 1

Hostname: gluster2
Uuid: abe7ee21-bea9-424f-ac5c-694bdd989d6b
State: Peer in Cluster (Connected)
root@gluster1:~#
root@gluster1:~# mount | grep gluster
gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)


root@gluster2:~# gluster volume status volume1
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/var/gluster-storage         49152     0          Y       3043
Brick gluster2:/var/gluster-storage         N/A       N/A        N       N/A
NFS Server on localhost                     2049      0          Y       2738
Self-heal Daemon on localhost               N/A       N/A        Y       2743
NFS Server on gluster1.mgr.example.com      2049      0          Y       3026
Self-heal Daemon on gluster1.mgr.example.com
                                            N/A       N/A        Y       3034

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

root@gluster2:~# gluster peer status
Number of Peers: 1

Hostname: gluster1.mgr.example.com
Uuid: dff9118b-a2bd-4cd8-b562-0dfdbd2ea8a3
State: Peer in Cluster (Connected)
root@gluster2:~#
root@gluster2:~# mount | grep gluster
gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@gluster2:~#
-Atin
Sent from one plus one

On Jan 18, 2016 11:41 AM, "Mark Chaney" <mail at lists.macscr.com> wrote:
>
> I have a two node cluster setup with iSCSI, using image files that are
> stored on the Gluster cluster as LUNs. They do appear to be syncing, but I
> have a few questions and I appreciate any help you can give me. Thanks for
> your time!
>
> 1) Why does the second brick show as N for online?
> 2) Why is the self-heal daemon shown as N/A? How can I correct that if it
> needs to be corrected?

The SHD doesn't need to listen on any specific port, and it is showing as online, so there is no issue there.

From the status output it looks like the brick hasn't started on the gluster2 node. Could you check/send the glusterd and brick logs from the gluster2 node?

> 3) Should I really be mounting the Gluster volumes on each Gluster node
> for iSCSI access, or should I be accessing /var/gluster-storage directly?
> 4) If I only have about 72GB of files stored in Gluster, why is each
> Gluster host using about 155GB? Are there duplicates stored somewhere, and why?
>
> [volume status, peer status and mount output trimmed; see the original post above]
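For reference, a rough sketch of where those logs usually live on gluster2, assuming a default packaged GlusterFS 3.x install that writes its logs under /var/log/glusterfs (exact file names vary by version and distribution):

    # glusterd (management daemon) log on gluster2:
    tail -n 200 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

    # brick log; the file name is derived from the brick path, so for
    # /var/gluster-storage it is usually:
    tail -n 200 /var/log/glusterfs/bricks/var-gluster-storage.log

    # check whether the brick process (glusterfsd) is running at all:
    ps aux | grep '[g]lusterfsd'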
----- Original Message -----
> From: "Mark Chaney" <mail at lists.macscr.com>
> To: gluster-users at gluster.org
> Sent: Monday, January 18, 2016 11:21:18 AM
> Subject: [Gluster-users] are they no longer syncing?
>
> I have a two node cluster setup with iSCSI, using image files that
> are stored on the Gluster cluster as LUNs. They do appear to be syncing,
> but I have a few questions and I appreciate any help you can give me.
> Thanks for your time!
>
> 1) Why does the second brick show as N for online?

'N' means that the second brick is not online. Running 'gluster volume start <volname> force' should bring the brick up.

> 2) Why is the self-heal daemon shown as N/A? How can I correct that if it
> needs to be corrected?

Self-heal daemon status on both gluster1 and gluster2 is shown as online (Y). It doesn't need to be corrected.

> 3) Should I really be mounting the Gluster volumes on each Gluster node
> for iSCSI access, or should I be accessing /var/gluster-storage directly?
> 4) If I only have about 72GB of files stored in Gluster, why is each
> Gluster host using about 155GB? Are there duplicates stored somewhere, and why?
>
> [volume status, peer status and mount output trimmed; see the original post above]

--
Thanks,
Anuradha.
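A minimal sketch of that recovery sequence, using the volume name volume1 from the output above (the heal check at the end is optional, just to see what the restored brick still needs to catch up on):

    # (re)start any brick processes that are not running; this does not
    # disturb bricks that are already online
    gluster volume start volume1 force

    # confirm the gluster2 brick now shows Online = Y with a port and PID
    gluster volume status volume1

    # list entries that still need to be healed onto the restored brick
    gluster volume heal volume1 info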
On 01/18/2016 12:51 AM, Mark Chaney wrote:
> I have a two node cluster setup with iSCSI, using image files that
> are stored on the Gluster cluster as LUNs. They do appear to be syncing,
> but I have a few questions and I appreciate any help you can give me.
> Thanks for your time!
>
> 1) Why does the second brick show as N for online?
> 2) Why is the self-heal daemon shown as N/A? How can I correct that if it
> needs to be corrected?
> 3) Should I really be mounting the Gluster volumes on each Gluster node
> for iSCSI access, or should I be accessing /var/gluster-storage directly?

You would need to mount the volume on each Gluster node. Accessing bricks directly is not recommended.

> 4) If I only have about 72GB of files stored in Gluster, why is each
> Gluster host using about 155GB? Are there duplicates stored somewhere, and why?

How are you measuring the brick utilization? Are you measuring it on the bricks or through a Gluster mount?

Regards,
Vijay
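Sketching both points with the hostnames and paths already used in this thread; the fstab line and mount options are an assumption rather than something from the original post, and this assumes the volume is a 2-way replica, which the self-heal daemon in the status output suggests:

    # mount the volume through the Gluster client on each node and point the
    # iSCSI backing files at the mount, never at the brick directory
    mount -t glusterfs gluster1:/volume1 /mnt/glusterfs

    # hypothetical fstab entry so the mount survives reboots; adjust options
    # for your distribution
    # gluster1:/volume1  /mnt/glusterfs  glusterfs  defaults,_netdev  0 0

    # compare what the volume reports with what the brick holds on disk;
    # on a replica volume each brick carries a full copy, and sparse or
    # preallocated image files can make these numbers differ further
    df -h /mnt/glusterfs
    du -sh /var/gluster-storage
    du -sh --apparent-size /var/gluster-storage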