
Displaying 20 results from an estimated 20000 matches similar to: "Is it possible to install Gluster Object storage on top of Gluster 3.0.5?"

2011 Oct 31
2
Is it possible to install Gluster management console using manual install process?
Hi, is it possible to install the Gluster management console using a manual install process? I mean the management console that ships with the ISO image; can it be set up by doing a manual install on a base CentOS server? Thanks, Xybrek
2011 Oct 20
1
What is the default root password for Gluster installed using ISO
Hi, I want to know the root/admin password for the shell/console of Gluster installed using the ISO installer (this is version 3.0.5), since the password 'glusteradmin' won't work. If it really is 'glusteradmin', then what is the username? Is it root or something else? Cheers.
2017 Jun 20
0
gluster peer probe failing
Hi, I am able to recreate the issue, and here is my RCA: the maximum value, i.e. 32767, is overflowed while being manipulated, and this was previously not handled properly, hence glusterd was crashing with SIGSEGV. The issue is being fixed with " https://bugzilla.redhat.com/show_bug.cgi?id=1454418" and is being backported as well. Thanks Gaurav On Tue, Jun 20, 2017 at 6:43 AM, Gaurav
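For context, the crash described above is the classic symptom of a signed 16-bit overflow: 32767 is SHRT_MAX, and incrementing past it wraps to a negative value. A minimal C sketch of that failure mode (illustrative only, not the actual glusterd code):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        short port = SHRT_MAX;  /* 32767, the top of the reserved-port range */
        port++;                 /* wraps to -32768 on typical platforms */
        printf("next port: %d\n", port);  /* prints a negative number */
        return 0;
    }

A negative value later used as a port number or array index is exactly the kind of thing that ends in SIGSEGV.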
2017 Jun 20
1
gluster peer probe failing
Thanks Gaurav! 1. Any estimate of when this fix will be released? 2. Any recommended workaround? Best, Guy. From: Gaurav Yadav [mailto:gyadav at redhat.com] Sent: Tuesday, June 20, 2017 9:46 AM To: Guy Cukierman <guyc at elminda.com> Cc: Atin Mukherjee <amukherj at redhat.com>; gluster-users at gluster.org Subject: Re: [Gluster-users] gluster peer probe failing
2017 Jun 20
2
gluster peer probe failing
Hi, I have tried on my host by setting the corresponding ports, but I didn't see the issue on my machine locally. However, with the logs you have sent it is pretty much clear the issue is related to the ports only. I will try to reproduce it on some other machine and will update you as soon as possible. Thanks Gaurav On Sun, Jun 18, 2017 at 12:37 PM, Guy Cukierman <guyc at elminda.com> wrote: >
2017 Jun 16
2
gluster peer probe failing
Could you please send me the output of the command "sysctl net.ipv4.ip_local_reserved_ports"? Apart from the output of that command, please send the logs so I can look into the issue. Thanks Gaurav On Thu, Jun 15, 2017 at 4:28 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > +Gaurav, he is the author of the patch, can you please comment here? > > > On Thu, Jun 15, 2017 at 3:28
2017 Jun 15
0
gluster peer probe failing
+Gaurav, he is the author of the patch, can you please comment here? On Thu, Jun 15, 2017 at 3:28 PM, Guy Cukierman <guyc at elminda.com> wrote: > Thanks, but my current settings are: > > net.ipv4.ip_local_reserved_ports = 30000-32767 > > net.ipv4.ip_local_port_range = 32768 60999 > > meaning the reserved ports are already in the short int range, so maybe I >
2017 Jun 15
2
gluster peer probe failing
Thanks, but my current settings are: net.ipv4.ip_local_reserved_ports = 30000-32767 net.ipv4.ip_local_port_range = 32768 60999, meaning the reserved ports are already in the short int range, so maybe I misunderstood something? Or is it a different issue? From: Atin Mukherjee [mailto:amukherj at redhat.com] Sent: Thursday, June 15, 2017 10:56 AM To: Guy Cukierman <guyc at elminda.com> Cc:
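For anyone checking their own hosts, the two kernel settings under discussion can be inspected with sysctl; these are generic commands, not a workaround endorsed in this thread:

    sysctl net.ipv4.ip_local_reserved_ports
    sysctl net.ipv4.ip_local_port_range

The first range is excluded from automatic allocation, while the second is where the kernel picks ephemeral ports; 32767 is notable here because it is the maximum value of a signed 16-bit integer.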
2017 Sep 08
1
Redis db permission issue while running GitLab in Kubernetes with Gluster
Getting this answer back on the list in case anyone else is trying to share storage. Thanks for the docs pointer, Tanner. -John On Thu, Sep 7, 2017 at 6:50 PM, Tanner Bruce <tanner.bruce at farmersedge.ca> wrote: > You can set a security context on your pod to set the gid as needed: > https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ > > > This
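A pod-level security context of the kind Tanner links to would look roughly like the fragment below; the names and the fsGroup value are illustrative assumptions, not taken from the thread:

    apiVersion: v1
    kind: Pod
    metadata:
      name: redis-example            # hypothetical pod name
    spec:
      securityContext:
        fsGroup: 1000                # mounted volumes are made group-owned by this GID
      containers:
      - name: redis
        image: redis
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: redis-data      # hypothetical Gluster-backed PVC

With fsGroup set, Kubernetes applies that group ID to the mounted volume so the container process can write to it without running as root.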
2017 Jun 18
0
gluster peer probe failing
Hi, Below please find the reserved ports and log, thanks. sysctl net.ipv4.ip_local_reserved_ports: net.ipv4.ip_local_reserved_ports = 30000-32767 glusterd.log: [2017-06-18 07:04:17.853162] I [MSGID: 106487] [glusterd-handler.c:1242:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req 192.168.1.17 24007 [2017-06-18 07:04:17.853237] D [MSGID: 0] [common-utils.c:3361:gf_is_local_addr]
2010 Sep 23
1
proposed new doco for "Gluster 3.1: Installing GlusterFS on OpenSolaris"
Hi all Reference: http://support.zresearch.com/community/documentation/index.php/Gluster_3.1:_Installing_GlusterFS_on_OpenSolaris I have found this guide to be too brief/terse, and have endeavoured to improve it with more of a recipe/howto approach - and possibly misunderstood the intent of the brief directions in the process. Please advise if there are any errors. Once the procedure is
2017 Sep 01
2
peer rejected but connected
Logs from the newly added node helped me in the RCA of the issue. The info file on node 10.5.6.17 contains an additional property, "tier-enabled", which is not present in the info files of the other 3 nodes. When a gluster peer probe call is made, a cksum is compared in order to maintain consistency across the cluster. In this case the two files differ, leading to different cksums, causing state in
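The consistency check described above compares a checksum of each volume's info file across peers. A quick way to see whether one node carries the extra property (the path is the standard glusterd location; the volume name is a placeholder):

    grep tier-enabled /var/lib/glusterd/vols/<volname>/info

If the line appears on one node only, the checksums will differ and the probing peer ends up rejected.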
2017 Oct 04
0
Glusterd not working with systemd in redhat 7
On Wed, Oct 04, 2017 at 09:44:44AM +0000, ismael mondiu wrote: > Hello, > > I'd like to test if 3.10.6 version fixes the problem . I'm wondering which is the correct way to upgrade from 3.10.5 to 3.10.6. > > It's hard to find upgrade guides for a minor release. Can you help me please ? Packages for GlusterFS 3.10.6 are available in the testing repository of the
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote: > Please provide the output of gluster volume info, gluster > volume status and gluster peer status. > > Apart from the above info, please provide glusterd logs, > cmd_history.log. > > Thanks > Gaurav > > On Tue, Sep 12, 2017 at 2:22 PM, lejeczek > <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Hello, I'd like to test whether version 3.10.6 fixes the problem. I'm wondering what the correct way is to upgrade from 3.10.5 to 3.10.6. It's hard to find upgrade guides for a minor release. Can you help me please? Thanks in advance Ismael ________________________________ From: Atin Mukherjee <amukherj at redhat.com> Sent: Sunday, September 17, 2017 14:56 To: ismael
2017 Sep 17
2
Glusterd not working with systemd in redhat 7
The backport just got merged a few minutes back, and this fix should be available in the next update of 3.10. On Fri, Sep 15, 2017 at 2:08 PM, ismael mondiu <mondiu at hotmail.com> wrote: > Hello Team, > > Do you know when the backport to 3.10 will be available ? > > Thanks > > > > > ------------------------------ > *From:* Atin Mukherjee <amukherj at
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. glusterd.log and cmd_history.log. On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote: > > > On 13/09/17 06:21, Gaurav Yadav wrote: > >> Please provide the output of gluster volume info, gluster volume status >> and gluster peer status. >> >> Apart from the above info, please provide glusterd logs,
2017 Oct 04
2
Glusterd not working with systemd in redhat 7
Thanks Niels, We want to install it on Red Hat 7. We work in a secured environment with no internet access. We download the packages from https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/ and then push the packages to the server and install them via the rpm command. Do you think this is a correct way to upgrade Gluster when working without internet access? Thanks in advance
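An offline minor-version upgrade along those lines usually means mirroring the repository directory on a connected machine and installing the whole RPM set in one transaction so dependencies resolve together; a generic sketch, not a procedure confirmed in this thread:

    # on a machine with internet access
    wget -r -np -nd -A '*.rpm' \
        https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/

    # copy the RPMs to the target server, then:
    yum localinstall glusterfs-*.rpm

Installing the packages together, rather than one rpm -Uvh at a time, avoids leaving the dependency chain partially upgraded.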
2017 Aug 29
3
peer rejected but connected
hi fellas, same old, same old; in the log of the probing peer I see: ... [2017-08-29 13:36:16.882196] I [MSGID: 106493] [glusterd-handler.c:3020:__glusterd_handle_probe_query] 0-glusterd: Responded to priv.xx.xx.priv.xx.xx.x, op_ret: 0, op_errno: 0, ret: 0 [2017-08-29 13:36:16.904961] I [MSGID: 106490] [glusterd-handler.c:2606:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid:
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of the same brick would be required. Please check whether the brick process went down or crashed. Doing a volume start force should resolve the issue. On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote: > Please send me the logs as well, i.e. glusterd.log and cmd_history.log. > > > On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
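For reference, the "volume start force" Atin suggests is the standard CLI invocation (the volume name is a placeholder); on an already-started volume it restarts any brick processes that are not running without disturbing the bricks that are:

    gluster volume start <volname> force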