Displaying 20 results from an estimated 10000 matches similar to: "which components need ssh keys?"
2018 Jan 03
0
which components need ssh keys?
Only Geo-replication uses SSH, since it works between two clusters. All
other features are limited to a single cluster/volume, so communication
happens via Glusterd (port tcp/24007) and the brick ports (tcp/49152-49251).
On Wednesday 03 January 2018 03:29 PM, lejeczek wrote:
> hi everyone
>
> I think geo-repl needs ssh and keys in order to work, but does
> anything else? Self-heal perhaps?
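To see what this means on a live cluster, a minimal sketch, assuming a volume named gvol (the name is illustrative) and a Linux node with iproute2:

# Glusterd listens on tcp/24007; each brick gets its own port, shown in the Port column.
gluster volume status gvol
# Cross-check which gluster processes are actually listening on this node.
ss -ltnp | grep -E 'glusterd|glusterfsd'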
2018 Jan 05
1
which components need ssh keys?
On Wed, Jan 3, 2018, at 3:23 AM, Aravinda wrote:
> Only Geo-replication uses SSH, since it works between two clusters. All
> other features are limited to a single cluster/volume, so communication
> happens via Glusterd (port tcp/24007) and the brick ports (tcp/49152-49251).
Have we deprecated SSL/TLS for the local I/O and management paths? The code's still there, and I think I've even
2018 Apr 17
2
Bitrot - Restoring bad file
Hi,
I have a question regarding bitrot detection.
Following the Red Hat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file restoration after bitrot.
"gluster volume bitrot VOLNAME status" gets me the GFIDs that are corrupt and on which host this happens.
As far as I can tell
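As a rough sketch of mapping a reported GFID back to a path on the affected brick (the GFID value and the brick path /bricks/brick1 below are placeholders, not taken from the thread):

# GFID as reported by "gluster volume bitrot VOLNAME status" (placeholder value).
GFID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
BRICK="/bricks/brick1"
# On the brick, every regular file is hard-linked as .glusterfs/<first 2>/<next 2>/<gfid>.
GFID_PATH="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
# The real file is whatever else shares that inode.
find "$BRICK" -samefile "$GFID_PATH" -not -path "*/.glusterfs/*"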
2018 Apr 18
0
Bitrot - Restoring bad file
On 04/17/2018 06:25 PM, Omar Kohl wrote:
> Hi,
>
> I have a question regarding bitrot detection.
>
> Following the Red Hat manual (https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/bitrot-restore_corrupt_file) I am trying out bad-file restoration after bitrot.
>
> "gluster volume bitrot VOLNAME status" gets me the
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
Yes, but I want to add an arbiter to an existing volume. Is it the same logic?
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Tue, Nov 5, 2024, 14:09, Aravinda <aravinda at kadalu.tech> wrote:
> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be the Arbiter brick.
>
> gluster volume
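For reference, a minimal sketch of both the create and the add case, with hosts and brick paths as placeholders:

# New volume: two data bricks plus one arbiter brick per replica set.
gluster volume create gvol replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arb1
# Existing replica 2 volume: add one arbiter brick per replica set, in order.
gluster volume add-brick gvol replica 3 arbiter 1 server3:/bricks/arb1

With multiple bricks per server the logic is the same, except there is one arbiter brick to supply for every replica pair, listed in the same order as the existing sub-volumes.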
2006 Nov 18
1
deriv when one term is indexed
Hi,
I'm fitting a standard nonlinear model to the luminances measured
from the red, green and blue guns of a TV display, using nls.
The call is:
dd.nls <- nls(Lum ~ Blev + beta[Gun] * GL^gamm,
data = dd, start = st)
where st was initially estimated using optim()
st
$Blev
[1] -0.06551802
$beta
[1] 1.509686e-05 4.555250e-05 7.322720e-06
$gamm
[1] 2.511870
This works fine but I
2005 May 28
3
Dovecot auth process died because of Socket operation on non-socket
Hi,
I'm having problems setting up Dovecot 1.0-stable on a Debian amd64
server. The process dies with the following message:
May 28 15:13:46 rouge dovecot: Dovecot v1.0-stable starting up
May 28 15:13:47 rouge dovecot: Login process died too early - shutting down
May 28 15:13:47 rouge dovecot: pop3-login: fd_send(-1) failed: Socket
operation on non-socket
May 28 15:13:47 rouge dovecot:
2006 Dec 23
2
rc15 errors
Hi,
Since I installed rc15 I'm seeing the following errors in the logs of a
server with several hundred POP3 and IMAP users. All the indexes
have been erased and recreated after installation.
Dec 21 21:06:46 rouge dovecot: pop3-login: inotify_init() failed: Too
many open files
The maildir mailboxes are on NFS, but the indexes are local. What should
be done to correct this error?
These two
2007 Mar 05
2
imap core dump with rc25
Hi,
I had a core dump while using rc25. Here are the backtraces:
Mar 5 00:52:31 rouge dovecot: IMAP(XXXXXX): Maildir
/home/XXXXXX/Mail/.Sent sync: UID < next_uid (1 < 2, file =
1173024945.P2421Q0M874108.rouge:2,S)
Mar 5 00:52:31 rouge dovecot: IMAP(XXXXXX): file client.c: line 401
(_client_input): assertion failed: (!client->handling_input)
Mar 5 00:52:31 rouge dovecot: child 29120
2007 Jan 26
3
imap-login crash with RC19
Hi Timo,
Using RC19, I've had the following crash. If there was a core file,
I've got no idea where it's gone...
Jan 25 10:35:10 rouge dovecot: imap-login: file client.c: line 528
(client_unref): assertion failed: (client->refcount > 0)
Jan 25 10:35:10 rouge dovecot: child 25498 (login) killed with signal 6
Best regards,
--
Nico
You realize that a woman is dynamite
2017 Jun 23
2
seeding my georeplication
I have a ~600 TB distributed gluster volume that I want to start using
geo-replication on.
The current volume is on six 100 TB bricks on two servers.
My plan is:
1) copy each of the bricks to a new arrays on the servers locally
2) move the new arrays to the new servers
3) create the volume on the new servers using the arrays
4) fix the layout on the new volume
5) start georeplication (which should be
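A hedged sketch of step 4, assuming the new volume is named newvol (the geo-replication create/start commands themselves appear further down in these results):

# Recalculate the directory layout so the new bricks receive new files.
gluster volume rebalance newvol fix-layout start
# Check progress before starting geo-replication.
gluster volume rebalance newvol status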
2023 Nov 03
1
Gluster Geo replication
While creating the Geo-replication session, it mounts the secondary volume to check the available size. To mount the secondary volume on the Primary, ports 24007 and 49152-49664 of the secondary volume need to be accessible from the Primary (only on the node from which the Geo-rep create command is executed). This needs to be changed to use SSH (bug). Alternatively, use the georep setup tool
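A minimal sketch of opening those ports on the secondary nodes, assuming firewalld is in use:

# Allow glusterd and the secondary volume's brick port range to be reached
# from the primary node that runs the geo-rep create command.
firewall-cmd --permanent --add-port=24007/tcp
firewall-cmd --permanent --add-port=49152-49664/tcp
firewall-cmd --reload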
2017 Oct 25
3
Gluster Health Report tool
Hi,
We started a new project to identify issues/misconfigurations in
Gluster nodes. The project is very young and not yet ready for
production use; feedback on the existing reports and ideas for more
reports are welcome.
This tool needs to run on every Gluster node to detect local
issues (for example: parsing log files, checking disk space, etc.) on each
node. But some of the reports use Gluster
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the Geo-replication status command is
run (without any volume name)?
gluster volume geo-replication status
Volume stop force should work even if a Geo-replication session exists.
From the error it looks like the node "arbiternode.domain.tld" in the Master
cluster is down or not reachable.
regards
Aravinda VK
On 08/07/2017 10:01 PM, mabi wrote:
> Hi,
>
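A short sketch of the checks and commands discussed here, with <mastervol>, <slavehost> and <slavevol> as placeholders:

# List every geo-replication session known to the cluster.
gluster volume geo-replication status
# Stopping the volume should work even with a session still registered.
gluster volume stop <mastervol> force
# If the stale session itself has to be removed, stop and delete it explicitly.
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop force
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> delete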
2018 Jan 09
2
Bricks to sub-volume mapping
But do we store this information somewhere as part of gluster metadata or something...
Thanks and Regards,
--Anand
Extn : 6974
Mobile : 91 9552527199, 91 9850160173
From: Aravinda [mailto:avishwan at redhat.com]
Sent: 09 January 2018 12:31
To: Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] Bricks to sub-volume mapping
First 6 bricks
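The mapping itself is positional: bricks form sub-volumes in the order they were given at create/add-brick time, replica-count bricks at a time. A rough sketch of reading it from the CLI, assuming a volume gvol with replica count 3:

# Print bricks in definition order, grouped three per line (one sub-volume per line).
gluster volume info gvol | awk '/^Brick[0-9]+:/ {print $2}' | paste - - -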
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs so I already stumbled before the first
hurdle!
I have been using the scripts in the extras/geo-rep provided for an earlier
version upgrade. With a bit of tinkering, these have given me a file
2017 Aug 08
1
How to delete geo-replication session?
Sorry I missed your previous mail.
Please perform the following steps once a new node is added:
- Run the gsec_create command again:
gluster system:: execute gsec_create
- Run the Geo-rep create command with force, then run start force:
gluster volume geo-replication <mastervol> <slavehost>::<slavevol>
create push-pem force
gluster volume geo-replication <mastervol>
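The truncated last command is presumably the start step referred to above ("run start force"); a sketch:

gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start force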
2013 Oct 31
1
changing volume from Distributed-Replicate to Distributed
hi all,
as the title says - i'm looking to change a volume from dist/repl -> dist.
we're currently running 3.2.7. a few questions for you gurus out there:
- is this possible to do on 3.2.7?
- is this possible to do with 3.4.1? (would involve upgrade)
- are there any pitfalls i should be aware of?
many thanks in advance,
regards,
paul
2007 Jan 23
2
imap core dump with rc18
Hi,
I had a core dump while using rc18. Here are the backtraces:
Jan 23 01:01:44 rouge dovecot: IMAP(user): file mail-index-view.c: line
386 (_view_lookup_uid_range): assertion failed: (*last_seq_r >=
*first_seq_r)
Jan 23 01:01:44 rouge dovecot: IMAP(user): Raw backtrace: [0x47f25b00000000]
Jan 23 01:01:44 rouge dovecot: child 24319 (imap) killed with signal 6
Core was generated by
2007 Apr 19
1
fs quota plugin and NFS
Hi,
I'm trying to use the Dovecot v1 fs quota plugin. The server uses
NFS-mounted volumes for the INBOX and other maildir folders. The /usr/bin/quota
command works seamlessly, but I get errors with the quota plugin,
which gives the following logs:
Apr 19 17:46:15 rouge dovecot: IMAP(xyxyxyx): quotactl(Q_GETQUOTA,
nfs.xxx.yyy.org:/home) failed: No such file or directory
Apr 19 17:46:18 rouge