Displaying 20 results from an estimated 2000 matches similar to: "Gluster Health Report tool"
2017 Oct 25
2
[Gluster-devel] Gluster Health Report tool
Hi Aravinda,
Very nice initiative, thank you very much! As a small recommendation, it would be nice to have a "nagios/icinga" mode, perhaps via a "-n" parameter, which would run the health check and output the status in a Nagios/Icinga-compatible format. That way the tool could be used directly by Nagios for monitoring.
Best,
M.
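A Nagios/Icinga plugin mode is mostly a matter of output convention: a single status line on stdout plus a well-known exit code. A minimal sketch of what such a "-n" mode could emit (the "GLUSTER" prefix and the metric names are invented for illustration, not part of the actual tool):

```python
# Exit codes defined by the Nagios plugin API.
NAGIOS_CODES = {"OK": 0, "WARNING": 1, "CRITICAL": 2, "UNKNOWN": 3}

def nagios_report(status, message, perfdata=None):
    """Format a check result as a Nagios/Icinga plugin output line plus
    the exit code a monitoring server expects."""
    line = "GLUSTER %s - %s" % (status, message)
    if perfdata:
        # Optional performance data goes after a pipe character.
        line += " | " + perfdata
    return line, NAGIOS_CODES[status]

# A real "-n" mode would print the line and sys.exit(code).
line, code = nagios_report("OK", "all bricks online", "bricks_online=6;;;0;6")
print(line)  # GLUSTER OK - all bricks online | bricks_online=6;;;0;6
```

Nagios treats exit code 0/1/2/3 as OK/WARNING/CRITICAL/UNKNOWN, so the health report's findings would only need to be mapped onto those four states.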
2018 Jan 09
0
Bricks to sub-volume mapping
No, we don't store that information separately, but it can easily be
derived from the Volume Info.
For example, the Volume Info below shows "Number of Bricks" in
the following format:
    Number of Subvols x (Number of Data Bricks + Number of Redundancy
    Bricks) = Total Bricks
Note: sub-volumes are predictable without storing them as separate info
since we do not have
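Because bricks are listed in order, consecutive groups of (data + redundancy) bricks form one sub-volume, so the mapping can be recovered mechanically from the "Number of Bricks" line. A small illustrative sketch (brick names are placeholders, and the parser assumes exactly the format shown above):

```python
import re

def subvol_groups(number_of_bricks, bricks):
    """Split the ordered brick list into sub-volume groups, using the
    "Number of Bricks" line, e.g. "2 x (4 + 2) = 12"."""
    m = re.match(r"(\d+)\s*x\s*\((\d+)\s*\+\s*(\d+)\)\s*=\s*(\d+)",
                 number_of_bricks)
    subvols, data, redundancy, total = map(int, m.groups())
    per_subvol = data + redundancy
    # Sanity-check the arithmetic against the actual brick list.
    assert subvols * per_subvol == total == len(bricks)
    return [bricks[i:i + per_subvol] for i in range(0, total, per_subvol)]

bricks = ["server%d:/brick" % i for i in range(12)]
groups = subvol_groups("2 x (4 + 2) = 12", bricks)
# groups[0] is the first sub-volume (first 6 bricks), groups[1] the second.
```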
2017 Oct 25
0
[Gluster-devel] Gluster Health Report tool
Hi,
Since people are suggesting Nagios, I can't resist suggesting exporting
the metrics in the Prometheus format,
or at least making the project into a library so that
https://github.com/prometheus/client_python could be used to export the
Prometheus metrics.
There has been an attempt at https://github.com/ofesseler/gluster_exporter
but it is not maintained anymore.
Cheers,
Marcin
On Wed,
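For reference, the Prometheus text exposition format is plain text, so a health check can emit it even without a client library; in practice one would use client_python's Gauge and start_http_server instead of rolling it by hand. A hand-rolled sketch just to show the format (the metric names are invented for illustration):

```python
def prometheus_exposition(metrics):
    """Render gauge metrics in the Prometheus text exposition format.
    `metrics` maps metric name -> (help text, value)."""
    lines = []
    for name, (help_text, value) in metrics.items():
        lines.append("# HELP %s %s" % (name, help_text))
        lines.append("# TYPE %s gauge" % name)
        lines.append("%s %s" % (name, value))
    return "\n".join(lines) + "\n"

# Hypothetical metrics a Gluster health exporter might publish.
print(prometheus_exposition({
    "gluster_bricks_online": ("Number of bricks currently online", 6),
    "gluster_peers_connected": ("Number of connected peers", 3),
}))
```

A Prometheus server scraping this output over HTTP would then handle storage, graphing, and alerting, which is the main advantage over a one-shot Nagios check.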
2018 Jan 09
2
Bricks to sub-volume mapping
But do we store this information somewhere as part of Gluster metadata or something similar?
Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173
From: Aravinda [mailto:avishwan at redhat.com]
Sent: 09 January 2018 12:31
To: Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping
First 6 bricks
2017 Jun 23
2
seeding my georeplication
I have a ~600tb distributed gluster volume that I want to start using geo
replication on.
The current volume is on 6 100tb bricks on 2 servers
My plan is:
1) copy each of the bricks to new arrays on the servers locally
2) move the new arrays to the new servers
3) create the volume on the new servers using the arrays
4) fix the layout on the new volume
5) start georeplication (which should be
2017 Aug 08
1
How to delete geo-replication session?
Sorry, I missed your previous mail.
Please perform the following steps once a new node is added:
- Run the gsec create command again:
gluster system:: execute gsec_create
- Run the geo-rep create command with force, then run start with force:
gluster volume geo-replication <mastervol> <slavehost>::<slavevol>
create push-pem force
gluster volume geo-replication <mastervol>
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is
run (without any volume name)?
gluster volume geo-replication status
Volume stop force should work even if a Geo-replication session exists.
From the error it looks like node "arbiternode.domain.tld" in the Master
cluster is down or not reachable.
regards
Aravinda VK
On 08/07/2017 10:01 PM, mabi wrote:
> Hi,
>
2017 Jun 26
1
"Rotating" .glusterfs/changelogs
Hello all,
I'm trying to find a way to rotate the metadata changelogs.
I've learned so far (from ndevos in #gluster) that the changelog is needed
for certain services, among them geo-replication, but I'm not entirely
sure of the extent.
Is there a way to rotate these logs so that it takes up less space?
This is not an entirely critical issue, but it seems kinda silly when
we have a 3 GB volume
2018 Jan 09
2
Bricks to sub-volume mapping
Hi Team,
Please let me know how I can tell which bricks are part of which sub-volumes in the case of a disperse volume; for example, the volume below has two sub-volumes:
Type: Distributed-Disperse
Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: pdchyperscale1sds:/ws/disk1/ws_brick
Brick2:
2017 Aug 08
0
How to delete geo-replication session?
When I run "gluster volume geo-replication status" I see my geo-replication session listed correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details, have a quick look at my previous post here:
2018 Apr 17
2
Bitrot - Restoring bad file
Hi,
I have a question regarding bitrot detection.
Following the RedHat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file-restoration after bitrot.
"gluster volume bitrot VOLNAME status" gets me the GFIDs that are corrupt and on which Host this happens.
As far as I can tell
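Given a GFID from the bitrot status output, the on-brick location of the file's hard link can be derived: each brick keeps one under .glusterfs/, indexed by the first two hex-digit pairs of the GFID. A sketch (the brick path and GFID below are placeholders, not values from a real volume):

```python
import os

def gfid_backend_path(brick_root, gfid):
    """Return the hard-link path GlusterFS keeps for a GFID inside a
    brick: <brick>/.glusterfs/<first 2 hex>/<next 2 hex>/<full gfid>."""
    gfid = gfid.lower()
    return os.path.join(brick_root, ".glusterfs", gfid[0:2], gfid[2:4], gfid)

# Placeholder brick and GFID for illustration.
path = gfid_backend_path("/ws/disk1/ws_brick",
                         "0f9bafbb-1c7e-4cf6-8de4-4bb9f4e08c23")
print(path)
```

Resolving that path (or its link count) is a quick way to find the actual file a corrupt GFID refers to before attempting restoration.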
2018 Jan 09
0
Bricks to sub-volume mapping
The first 6 bricks belong to the first sub-volume and the next 6 bricks
belong to the second.
On Tuesday 09 January 2018 12:11 PM, Anand Malagi wrote:
>
> Hi Team,
>
> Please let me know how I can know which bricks are part of which
> sub-volumes in case of disperse volume, for example in below volume
> has two sub-volumes :
>
> Type: Distributed-Disperse
>
> Volume ID:
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs so I already stumbled before the first
hurdle!
I have been using the scripts in the extras/geo-rep provided for an earlier
version upgrade. With a bit of tinkering, these have given me a file
2018 Apr 18
0
Bitrot - Restoring bad file
On 04/17/2018 06:25 PM, Omar Kohl wrote:
> Hi,
>
> I have a question regarding bitrot detection.
>
> Following the RedHat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file-restoration after bitrot.
>
> "gluster volume bitrot VOLNAME status" gets me the
2017 Nov 25
1
How to read geo replication timestamps from logs
Folks, need help interpreting this message from my geo rep logs for my
volume mojo.
ssh%3A%2F%2Froot%40173.173.241.2%3Agluster%3A%2F%2F127.0.0.1%3Amojo-remote.log:[2017-11-22 00:59:40.610574] I [master(/bricks/lsi/mojo):1125:crawl] _GMaster: slave's time: (1511312352, 0)
The epoch 1511312352 is Wednesday, November 22, 2017 00:59:12 GMT, i.e.
28 seconds before the log entry's own timestamp.
The clocks are using the same ntp stratum and
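The "slave's time" tuple logged by _GMaster is (epoch seconds, nanoseconds), so the value can be checked directly with the stdlib; note the quoted epoch lands on November 22, just 28 seconds behind the log line's own timestamp:

```python
from datetime import datetime, timezone

# slave's time as logged by _GMaster: (epoch seconds, nanoseconds)
slave_time = (1511312352, 0)

stime = datetime.fromtimestamp(slave_time[0], tz=timezone.utc)
# Timestamp of the log entry itself, from the quoted line.
log_time = datetime(2017, 11, 22, 0, 59, 40, tzinfo=timezone.utc)

lag = log_time - stime  # how far the slave stime trails the log entry
print(stime.isoformat())  # 2017-11-22T00:59:12+00:00
print(lag)                # 0:00:28
```

So the slave's checkpoint is less than a minute behind the crawl that logged it, which is what one would expect from a healthy session with NTP-synced clocks.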
2017 Sep 17
2
georeplication sync deamon
Hi all,
I'd like to know more detail about GlusterFS geo-replication, specifically
about the sync daemon. If 'file A' is mirrored in the slave volume and a
change happens to 'file A', how does the sync daemon act?
1. transfer the whole 'file A' to the slave
2. transfer only the changes of 'file A' to the slave
Thanks a lot
2023 Nov 03
1
Gluster Geo replication
While creating the geo-replication session, Gluster mounts the secondary volume to check the available size. To mount the secondary volume on the primary, ports 24007 and 49152-49664 of the secondary volume need to be accessible from the primary (only on the node from which the geo-rep create command is executed). This needs to be changed to use SSH (bug). Alternatively, use the georep setup tool.
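Whether those ports are actually reachable from the primary node can be verified with a plain TCP connect before running the create command; a small sketch (the hostname is a placeholder):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports the secondary must expose to the primary, per the note above.
ports = [24007] + list(range(49152, 49665))  # 49152-49664 inclusive

# Example (placeholder host):
# blocked = [p for p in ports if not port_reachable("secondary.example.com", p)]
```

Only the node where the create command runs needs this reachability, so the check can be done from just that one host.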
2018 Jan 03
2
which components needs ssh keys?
hi everyone
I think geo-replication needs SSH and keys in order to work, but
does anything else? Self-heal perhaps?
The reason I ask is that I had some old keys Gluster put in when
I had geo-replication, which I removed, and self-heal has now gone
rogue; I cannot get statistics:
..
Gathering crawl statistics on volume WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
..
2023 Oct 31
2
Gluster Geo replication
Hi All,
What are the ports that need to be opened for Gluster geo-replication? We
have a very locked-down setup. I could gather the info below; do all of
these ports need to be open on master and slave for inter-communication,
or would just 22 work, since it uses rsync over SSH for the actual data push?
- Port 22 (TCP): used by SSH for secure data communication in
geo-replication.
- Port 24007
2017 Aug 27
1
Using glusterfind to backup to rsync.net?
Hi Milind,
I am thinking of doing backups from my Gluster volume at home. Would
glusterfind make that straightforward to do? My plan would be to have a
cron job that runs every night and copies documents and pictures to
rsync.net. For me it is not a lot of data, and I think a plain recursive
scanning rsync might work well enough, but hooking it up with
glusterfind would be more optimized.
Can