2017 Jun 20
2
remounting volumes, is there an easier way
....
I also noticed that when I restarted the volume, the port changed on the
server. So, clients that were still using the previous TCP port could not
reconnect.
I was wondering if there is a way to tell the client that a specific
volume has a new port to connect to?
Regards,
Ludwig
--
Ludwig Gamache
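For reference, the port lookup and remount can be done from the CLI. A
minimal sketch, assuming a volume named data01 served from serverA and
mounted at /mnt/data01 (all three names are placeholders):

    # show the current TCP port of each brick
    gluster volume status data01

    # remount so the client picks up the new port
    umount /mnt/data01
    mount -t glusterfs serverA:/data01 /mnt/data01

Native FUSE clients normally re-fetch brick ports from glusterd on port
24007, so a remount should only be needed when a client is stuck on a
stale connection.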
2017 Jun 22
0
remounting volumes, is there an easier way
On Tue, Jun 20, 2017 at 7:32 PM, Ludwig Gamache <ludwig at elementai.com>
wrote:
> All,
>
> Over the weekend, one of my volumes became unavailable. All clients could
> not access their mount points. On some of the clients, I had user processes
> that were using these mount points. So, I could not do a umount/mount without
>...
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled.
Ludwig
On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote:
> Do you have sharding enabled? If yes, don't do it.
> If no I'll let someone who knows better answer you :)
>
> On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> > All,
> >
> > We currently have a Gluster installation which is made of 2 servers. Each
> > server has 10 drives on ZFS. And I have a gluster mirror between these 2.
> >
> > The current config looks like:
> > SERVER A-BRICK 1 replicated to SERVER...
2017 Jun 16
1
emptying the trash directory
All,
I just enabled the trashcan feature on our volumes. It is working as
expected. However, I can't seem to find the rules to empty the trashcan. Is
there any automated process to do that? If so, what are the configuration
features?
Regards,
Ludwig
--
Ludwig Gamache
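For reference, there is no built-in purge schedule that I am aware of;
trashed files simply accumulate under the .trashcan directory at the
volume root. A hedged sketch of a periodic cleanup from a client mount,
assuming a 30-day retention and /mnt/VOLNAME as placeholders:

    # remove trashed files older than 30 days, leaving the
    # internal_op directory alone (it belongs to the trash translator)
    find /mnt/VOLNAME/.trashcan -type f ! -path '*/internal_op/*' \
         -mtime +30 -delete

A daily cron entry running the above would approximate an automated policy.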
2017 Jun 15
1
Interesting split-brain...
...the *favorite-child-policy*.
> Since the file is not important, you can simply delete it.
>
> [1]
> https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/#fixing-directory-entry-split-brain
>
> HTH,
> Karthik
>
> On Thu, Jun 15, 2017 at 8:23 AM, Ludwig Gamache <ludwig at elementai.com
> <mailto:ludwig at elementai.com>> wrote:
>
> I am new to gluster but already like it. I did maintenance last
> week where I shut down both nodes (one after the other). I had
> many files that needed to be healed after that. Everyt...
2017 Jun 15
2
Interesting split-brain...
...0000
trusted.bit-rot.version=0x060000000000000059397acd0005dadd
trusted.gfid=0xa70ae9af887a4a37875f5c7c81ebc803
Any recommendation on how to recover from that? BTW, the file is not
important and I could easily get rid of it without impact. So, if this is
an easy solution...
Regards,
--
Ludwig Gamache
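For reference, the extended attributes shown above are read directly on
the brick backend, and gluster can list what it flags as split-brain
itself. A sketch with placeholder brick path and volume name:

    # dump the AFR/bit-rot xattrs from a brick
    getfattr -d -m . -e hex /data/brick1/path/to/file

    # ask gluster which entries it considers split-brain
    gluster volume heal data01 info split-brain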
2017 Sep 25
0
Adding bricks to an existing installation.
Do you have sharding enabled? If yes, don't do it.
If no I'll let someone who knows better answer you :)
On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> All,
>
> We currently have a Gluster installation which is made of 2 servers. Each
> server has 10 drives on ZFS. And I have a gluster mirror between these 2.
>
> The current config looks like:
> SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
>
> I now need t...
2017 Jun 15
0
Interesting split-brain...
...3.11 we have a way to
resolve it by using the *favorite-child-policy*.
Since the file is not important, you can simply delete it.
[1]
https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/#fixing-directory-entry-split-brain
HTH,
Karthik
On Thu, Jun 15, 2017 at 8:23 AM, Ludwig Gamache <ludwig at elementai.com>
wrote:
> I am new to gluster but already like it. I did maintenance last week
> where I shut down both nodes (one after the other). I had many files that
> needed to be healed after that. Everything worked well, except for 1 file.
> It is in split-bra...
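For reference, a sketch of the favorite-child-policy approach Karthik
mentions, assuming a volume named data01 (placeholder); the mtime policy
lets self-heal pick the most recently modified copy as the winner:

    gluster volume set data01 cluster.favorite-child-policy mtime
    gluster volume heal data01

Per-file resolution (e.g. gluster volume heal data01 split-brain
latest-mtime <path>) is an alternative when only one file is affected.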
2017 Sep 25
2
Adding bricks to an existing installation.
All,
We currently have a Gluster installation which is made of 2 servers. Each
server has 10 drives on ZFS. And I have a gluster mirror between these 2.
The current config looks like:
SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
I now need to add more space and a third server. Before I do the changes, I
want to know if this is a supported config. By adding a third server, I
simply want to
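For reference, a hedged sketch of two common ways such a setup can grow;
server names and brick paths below are placeholders, not the poster's
actual layout:

    # option 1: turn the replica 2 pair into replica 3
    # (adds redundancy on the new server, not capacity)
    gluster volume add-brick VOLNAME replica 3 serverC:/data/brick1

    # option 2: add another replica 2 brick pair
    # (adds capacity, keeps the replica count at 2)
    gluster volume add-brick VOLNAME serverA:/data/brick2 serverB:/data/brick2
    gluster volume rebalance VOLNAME start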
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote:
> All,
>
> I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last
> week, I enabled the trashcan feature on one of my volumes:
> gluster volume set date01 features.trash on
I think you misspelled the volume name. Is it data01 or date0...
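For reference, a quick way to check for a typo like that (data01 assumed
as the real name):

    # confirm the exact volume names gluster knows about
    gluster volume list
    gluster volume info data01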
2017 Oct 24
0
trying to add a 3rd peer
Are you sure all node names can be resolved on all other nodes?
You need to use the names previously used in Gluster - check them with 'gluster peer status' or 'gluster pool list'.
Regards,
Bartosz
> Message written by Ludwig Gamache <ludwig at elementai.com> on 24.10.2017 at 03:13:
>
> All,
>
> I am trying to add a third peer to my gluster install. The first 2 nodes have been running for many months and have gluster 3.10.3-1.
>
> I recently installed the 3rd node and gluster 3.10.6-1. I was abl...
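For reference, a hedged sketch of the checks Bartosz suggests, with node
names as placeholders:

    # on every node, confirm each peer name resolves
    getent hosts gluster01 gluster02 gluster03

    # compare against the names gluster already has on record
    gluster peer status
    gluster pool list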
2017 Oct 24
2
trying to add a 3rd peer
All,
I am trying to add a third peer to my gluster install. The first 2 nodes
have been running for many months and have gluster 3.10.3-1.
I recently installed the 3rd node and gluster 3.10.6-1. I was able to start
the gluster daemon on it. Afterwards, I tried to add the peer from one of the 2
previous servers (gluster peer probe IPADDRESS).
That first peer started the communication with the 3rd peer. At
2017 Jun 20
2
trash can feature, crashed???
...40.293723] I [MSGID: 106132]
[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already st
I now disabled the trashcan feature. I remounted the volumes on all clients
and it started to work.
Why would the trashcan feature have crashed/disabled my volume?
Regards,
Ludwig
--
Ludwig Gamache
IT Director - Element AI
4200 St-Laurent, suite 1200
514-704-0564
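For reference, backing the feature out as described above is a one-liner;
data01 is assumed here, since another message in the thread suggests
date01 was a typo:

    # disable the trash translator and confirm the setting
    gluster volume set data01 features.trash off
    gluster volume get data01 features.trash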
2017 Nov 01
1
Announcing Gluster release 3.10.7 (Long Term Maintenance)
The Gluster community is pleased to announce the release of Gluster
3.10.7 (packages available at [1]).
Release notes for the release can be found at [2].
We are still working on a further fix for the corruption issue when
sharded volumes are rebalanced, details as below.
* Expanding a gluster volume that is sharded may cause file corruption
- Sharded volumes are typically used for VM