similar to: health monitoring of replicated volume

Displaying 20 results from an estimated 10000 matches similar to: "health monitoring of replicated volume"

2013 Jun 26
2
HI Guys
Hi, I recently configured a 2-node replica GlusterFS setup and I am having a couple of issues. 1. As soon as I reboot node2, the GlusterFS volume on node1 is not available, but when I reboot/shutdown node1 the volume is still available on node 0, so please let me know if you have encountered the same issue. 2. I am not able to mount the GlusterFS volume at reboot time; I had to do it manually
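One way to handle the second point (the volume not mounting automatically at boot) is an /etc/fstab entry that waits for the network and can fetch the volume file from the other replica if the first server is down. A minimal sketch, assuming a volume named gvol and the mount point /mnt/gvol (neither is given in the original post):

    # _netdev delays the mount until networking is up; backup-volfile-servers
    # only affects where the client fetches the volume file at mount time
    node1:/gvol  /mnt/gvol  glusterfs  defaults,_netdev,backup-volfile-servers=node2  0 0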
2010 May 31
2
DHT translator problem
Hello, I am trying to configure a volume using DHT; however, after I mount it the mount point looks rather strange, and when I try to do 'ls' on it I get: ls: /mnt/gtest: Stale NFS file handle. I can create files and dirs in the mount point and I can list them, but I can't list the mount point itself. Example: the volume is mounted on /mnt/gtest [root at storage2]# ls -l /mnt/ ?---------
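Hand-written volfiles for the distribute (DHT) translator are easy to get subtly wrong, so one sanity check is to let the gluster CLI generate the configuration and see whether the stale-handle symptom persists. A rough sketch, assuming the servers are storage1/storage2 with bricks under /export/brick1 and a hypothetical volume name gtest:

    gluster volume create gtest storage1:/export/brick1 storage2:/export/brick1   # plain distribute (DHT) volume
    gluster volume start gtest
    mount -t glusterfs storage1:/gtest /mnt/gtest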
2011 Aug 17
1
cluster.min-free-disk separate for each, brick
On 15/08/11 20:00, gluster-users-request at gluster.org wrote: > Message: 1 > Date: Sun, 14 Aug 2011 23:24:46 +0300 > From: "Deyan Chepishev - SuperHosting.BG"<dchepishev at superhosting.bg> > Subject: [Gluster-users] cluster.min-free-disk separate for each > brick > To: gluster-users at gluster.org > Message-ID:<4E482F0E.3030604 at superhosting.bg>
2013 Nov 21
3
Sync data
Hi guys! I have 2 servers in replicate mode; node1 has all the data and node2 is empty. I created a volume (gv0) and started it. Now, how can I synchronize all files from node1 to node2? Steps that I followed: gluster peer probe node1; gluster volume create gv0 replica 2 node1:/data node2:/data; gluster volume start gv0. Thanks!
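With a replica 2 volume whose second brick starts out empty, the self-heal daemon copies the data over once a heal is triggered; a short sketch using the gv0 volume from the post:

    gluster volume heal gv0 full    # crawl the volume and replicate missing files onto node2
    gluster volume heal gv0 info    # list entries that still need healing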
2017 Aug 06
2
State: Peer Rejected (Connected)
Hi, I have a 3-node replica volume (including arbiter) with GlusterFS 3.8.11, and last night one of my nodes (node1) ran out of memory for some unknown reason, so the Linux OOM killer killed the glusterd and glusterfs processes. I restarted the glusterd process, but now that node is in "Peer Rejected" state from the other nodes, and from itself it rejects the two other nodes
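A commonly documented recovery for a node stuck in "Peer Rejected" is to reset that node's local glusterd state while keeping its UUID and then re-probe a healthy peer. A rough sketch of those steps, to be run only on the rejected node and assuming systemd:

    systemctl stop glusterd
    # keep glusterd.info (the node's UUID), drop the rest of the local state
    find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    systemctl start glusterd
    gluster peer probe node2        # any healthy peer will resync the volume config
    systemctl restart glusterd
    gluster peer status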
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
To quickly summarize my current situation: on node2 I have found the following indices/xattrop file which matches the GFID from the "heal info" command (below is the output of "ls -lai"): 2798404 ---------- 2 root root 0 Apr 28 22:51 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 As you can see this file has inode number 2798404, so I ran
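To see which other paths on that brick share the same inode (the indices/xattrop entries are hard links), one can search the brick by inode number on node2, where the entry was observed; a small sketch using the inode from the listing above:

    find /data/myvolume/brick -inum 2798404    # list every path under the brick that shares this inode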
2017 Aug 06
1
State: Peer Rejected (Connected)
Hi Ji-Hyeon, Thanks to your help I could find the problematic file. It is the quota file of my volume: it has a different checksum on node1, whereas node2 and the arbiternode have the same checksum. This is expected, as I had issues with my quota file and had to fix it manually with a script (more details on this mailing list in a previous post), and I only did that on node1. So what I now
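One way to confirm which node disagrees is to compare the volume's quota configuration file across the three nodes; a hedged sketch, assuming the volume is called myvolume and the standard /var/lib/glusterd layout:

    md5sum /var/lib/glusterd/vols/myvolume/quota.conf    # run on node1, node2 and the arbiter, then compare the sums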
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
Now I understand what you mean with the "-samefile" parameter of "find". As requested, I have now run the following command on all 3 nodes, with the output of all 3 nodes below: sudo find /data/myvolume/brick -samefile /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls node1: 8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43
2024 Jan 01
1
Replacing Failed Server Failing
Hi All (and Happy New Year), We had to replace one of our Gluster Servers in our Trusted Pool this week (node1). The new server is now built, with empty folders for the bricks, peered to the old Nodes (node2 & node3). We basically followed this guide: https://docs.rackspace.com/docs/recover-from-a-failed-server-in-a-glusterfs-array We are using the same/old IP address. So when we try
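The key step in guides of this kind is that the pool still remembers the old node1 by its UUID, so the rebuilt server has to adopt that UUID before rejoining. A speculative sketch, assuming systemd, the standard /var/lib/glusterd layout, and that the peer files reference node1 by its hostname or IP:

    # on node2: the peers/ file that mentions node1 is named after the old UUID
    grep -l node1 /var/lib/glusterd/peers/*
    # on the new node1: adopt that UUID, then rejoin the pool
    systemctl stop glusterd
    sed -i 's/^UUID=.*/UUID=<old-node1-uuid>/' /var/lib/glusterd/glusterd.info
    systemctl start glusterd
    gluster peer probe node2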
2017 Jul 31
2
Possible stale .glusterfs/indices/xattrop file?
I did a find on this inode number and could find the file, but only on node1 (nothing on node2 or the new arbiternode). Here is an ls -lai of the file itself on node1: -rw-r--r-- 1 www-data www-data 32 Jun 19 17:42 fileKey As you can see it is a 32-byte file, and as you suggested I ran a "stat" on this very same file through a glusterfs mount (using fuse), but unfortunately nothing
2017 Aug 06
0
State: Peer Rejected (Connected)
On 2017-08-06 15:59, mabi wrote: > Hi, > > I have a 3 nodes replica (including arbiter) volume with GlusterFS > 3.8.11 and this night one of my nodes (node1) had an out of memory for > some unknown reason and as such the Linux OOM killer has killed the > glusterd and glusterfs process. I restarted the glusterd process but > now that node is in "Peer Rejected"
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inline... best regards Dietmar On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote: > Hi Dietmar, > > I am trying to understand the problem and have a few questions. > > 1. Is trashcan enabled only on the master volume? no, trashcan is also enabled on the slave. Settings are the same as on the master, but the trashcan on the slave is complete
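For reference, the trashcan discussed here is the per-volume trash translator, toggled with ordinary volume options; a brief hedged sketch with a placeholder volume name:

    gluster volume set mastervol features.trash on                  # creates/uses the .trashcan directory on the volume
    gluster volume set mastervol features.trash-max-filesize 200MB  # only files up to this size are moved into the trashcan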
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:00 PM, mabi wrote: > To quickly summarize my current situation: > > on node2 I have found the following indices/xattrop file which > matches the GFID from the "heal info" command (below is the output of > "ls -lai"): > > 2798404 ---------- 2 root root 0 Apr 28 22:51 >
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 12:20 PM, mabi wrote: > I did a find on this inode number and I could find the file but only > on node1 (nothing on node2 and the new arbiternode). Here is an ls > -lai of the file itself on node1: Sorry I don't understand, isn't that (XFS) inode number specific to node2's brick? If you want to use the same command, maybe you should try `find
2017 Jul 31
0
Possible stale .glusterfs/indices/xattrop file?
On 07/31/2017 02:33 PM, mabi wrote: > Now I understand what you mean with the "-samefile" parameter of > "find". As requested, I have now run the following command on all 3 > nodes with the output of all 3 nodes below: > > sudo find /data/myvolume/brick -samefile > /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 > -ls > >
2023 Jan 25
1
Regarding Glusterfs file locking
Hi, Greetings of the day. Our configuration is as follows: we have installed both the GlusterFS server and the GlusterFS client on node1 as well as node2, and we have mounted the node1 volume on both nodes. Our use case is: from GlusterFS node1, we have to take an exclusive lock, open a file (which is shared between both nodes) and read/write that file. From GlusterFS node2, we
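One way to express that pattern from the shell on whichever node holds the fuse mount is an advisory exclusive lock taken around the write; the mount point and file names below are assumptions, and this relies on the client honouring flock-style locks:

    # hold an exclusive lock on a lock file stored on the gluster mount while
    # updating the shared file; the same command on the other node blocks
    # until the lock is released
    flock -x /mnt/glustervol/shared.lock -c 'echo "update from node1" >> /mnt/glustervol/shared.txt'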
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response.
>> What are the high and low watermarks for the tier set at?
# gluster volume get <vol> cluster.watermark-hi
Option                   Value
------                   -----
cluster.watermark-hi     90
# gluster volume get <vol> cluster.watermark-low
Option
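For context, those watermarks are ordinary volume options on a tiered volume and can be changed the same way they are queried; a short hedged sketch with a placeholder volume name:

    gluster volume set <vol> cluster.watermark-hi 90    # high watermark for hot-tier usage, in percent
    gluster volume set <vol> cluster.watermark-low 75   # low watermark, in percent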
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. So, I can see quota is turned *on* and would like you to check the quota settings and test to see the system behavior *if quota is turned off*. Although the file size that failed migration was 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
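The quota toggle being suggested is a standard volume-level command; a minimal sketch with a placeholder volume name (previously configured limits may need to be re-applied after re-enabling):

    gluster volume quota <vol> disable    # turn quota off for the test
    gluster volume quota <vol> enable     # re-enable it afterwards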
2017 Jul 30
2
Possible stale .glusterfs/indices/xattrop file?
Hi Ravi, Thanks for your hints. Below you will find the answers to your questions. First I tried to start the healing process by running: gluster volume heal myvolume, and then, as you suggested, watched the output of the glustershd.log file, but nothing appeared in that log file after running the above command. I checked the files which need healing using "heal <volume> info"
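For monitoring a heal from the command line, the index heal, the pending entries and the self-heal daemon log can be watched side by side; a short sketch using the volume name from the post and the default log location:

    gluster volume heal myvolume                  # kick off an index heal
    gluster volume heal myvolume info             # list entries still pending heal
    tail -f /var/log/glusterfs/glustershd.log     # watch the self-heal daemon while it works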
2013 Dec 17
1
Project pre planning
Hello GlusterFS users, can anybody please give me their opinion on the following facts and questions: 4 storage servers with 16 SATA bays, connected by GigE. Q1: The volume will be set up as distributed-replicated. Maildir, FTP dir, htdocs and file store directory => as subdirs in one big GlusterVolume, or each dir in its own GlusterVolume? Q2: Set up the bricks as a collection of
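For Q1, both layouts start from the same kind of create command; a hedged sketch of a 2 x 2 distributed-replicated volume with hypothetical server and brick names, where consecutive bricks form the replica pairs:

    gluster volume create bigvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    gluster volume start bigvol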