Displaying 20 results from an estimated 8000 matches similar to: "emptying the trash directory"
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote:
> All,
>
> I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last
> week, I enabled the trashcan feature on one of my volumes:
> gluster volume set date01 features.trash on
I think you misspelled the volume name. Is it data01 or date01?
> I also limited the max file size to 500MB:
2017 Jun 20
2
trash can feature, crashed???
All,
I currently have 2 bricks running Gluster 3.10.1. This is a CentOS
installation. On Friday last week, I enabled the trashcan feature on one of
my volumes:
gluster volume set date01 features.trash on
I also limited the max file size to 500MB:
gluster volume set data01 features.trash-max-filesize 500MB
3 hours after I enabled this, this specific gluster volume went down:
[2017-06-16
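For reference, a quick way to confirm which volume actually has the trash options applied (the volume names data01/date01 are the ones used above; this is only a sketch):
# list all volumes known to the cluster, in case the name was mistyped
gluster volume list
# show the effective trash settings on the intended volume
gluster volume get data01 features.trash
gluster volume get data01 features.trash-max-filesize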
2017 Jun 22
0
remounting volumes, is there an easier way
On Tue, Jun 20, 2017 at 7:32 PM, Ludwig Gamache <ludwig at elementai.com>
wrote:
> All,
>
> Over the weekend, one of my volumes became unavailable. None of the clients
> could access their mount points. On some of the clients, I had user processes
> that were using these mount points. So, I could not do a umount/mount without
> killing these processes.
>
> I also noticed
2017 Jun 20
2
remounting volumes, is there an easier way
All,
Over the weekend, one of my volumes became unavailable. None of the clients
could access their mount points. On some of the clients, I had user processes
that were using these mount points. So, I could not do a umount/mount without
killing these processes.
I also noticed that when I restarted the volume, the port changed on the
server. So, clients that were still using the previous TCP port could
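As a side note, the brick's current port can be read from glusterd, and a FUSE client can usually be lazily unmounted and remounted to pick up the new port (processes holding files keep a detached handle to the old mount). The volume name, hostname and mount point below are placeholders:
# shows the TCP port each brick process is listening on
gluster volume status data01
# on a client: lazy unmount, then remount through glusterd (port 24007)
umount -l /mnt/data01
mount -t glusterfs server-a:/data01 /mnt/data01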
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled.
Ludwig
On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote:
> Do you have sharding enabled? If yes, don't do it.
> If not, I'll let someone who knows better answer you :)
>
> On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> > All,
> >
> > We currently have a Gluster installation which is made of 2
2017 Jun 15
1
Interesting split-brain...
Can you please explain how we ended up in this scenario? I think that
will help us understand more about these scenarios and why Gluster
recommends a replica 3 or arbiter volume.
Regards
Rafi KC
On 06/15/2017 10:46 AM, Karthik Subrahmanya wrote:
> Hi Ludwig,
>
> There is no way to resolve gfid split-brains with type mismatch. You
> have to do it manually by following the steps in [1].
2017 Jun 15
0
Interesting split-brain...
Hi Ludwig,
There is no way to resolve gfid split-brains with type mismatch. You have
to do it manually by following the steps in [1].
In case of type mismatch it is recommended to resolve it manually. But for
a gfid-only mismatch, in 3.11 we have a way to
resolve it by using the *favorite-child-policy*.
Since the file is not important, you can go with deleting it.
[1]
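For reference, the manual procedure referred to above usually boils down to something like the following sketch (brick paths, file name, gfid and volume name are placeholders):
# compare the gfid each brick has recorded for the file
getfattr -n trusted.gfid -e hex /brick1/path/to/file
getfattr -n trusted.gfid -e hex /brick2/path/to/file
# on the brick holding the unwanted copy, remove the file and its
# .glusterfs hard link (the first two byte pairs of the gfid name the dirs)
rm /brick2/path/to/file
rm /brick2/.glusterfs/<g1>/<g2>/<full-gfid>
# let self-heal copy the surviving version back
gluster volume heal myvol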
2017 Jun 15
2
Interesting split-brain...
I am new to gluster but already like it. I did a maintenance last week
where I shut down both nodes (one after the other). I had many files that
needed to be healed after that. Everything worked well, except for 1 file.
It is in split-brain, with 2 different GFID. I read the documentation but
it only covers the cases where the GFID is the same on both bricks. BTW, I
am running Gluster 3.10.
Here
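For what it's worth, the affected entries and their state can normally be listed before deciding how to resolve anything (the volume name below is a placeholder):
gluster volume heal myvol info
gluster volume heal myvol info split-brain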
2017 Oct 24
0
trying to add a 3rd peer
Are you sure that all node names can be resolved on all other nodes?
You need to use the names previously used in Gluster - check them with 'gluster peer status' or 'gluster pool list'.
Regards,
Bartosz
> Message written by Ludwig Gamache <ludwig at elementai.com> on 24.10.2017 at 03:13:
>
> All,
>
> I am trying to add a third peer to my gluster
2017 Sep 25
0
Adding bricks to an existing installation.
Do you have sharding enabled? If yes, don't do it.
If not, I'll let someone who knows better answer you :)
On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> All,
>
> We currently have a Gluster installation which is made of 2 servers. Each
> server has 10 drives on ZFS. And I have a gluster mirror between these 2.
>
> The current config looks like:
>
2017 Sep 25
2
Adding bricks to an existing installation.
All,
We currently have a Gluster installation which is made of 2 servers. Each
server has 10 drives on ZFS. And I have a gluster mirror between these 2.
The current config looks like:
SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
I now need to add more space and a third server. Before I do the changes, I
want to know if this is a supported config. By adding a third server, I
simply want to
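As a rough sketch of the mechanics (host names, brick paths and the volume name are placeholders): on a replica 2 volume, bricks are added in multiples of two, and consecutive bricks on the add-brick line form a replica pair, so a new pair can be spread across two different servers and the existing data rebalanced onto it:
# add one new replica pair, spread across server B and the new server C
gluster volume add-brick myvol serverB:/data/brick2 serverC:/data/brick1
# spread existing data over the new bricks
gluster volume rebalance myvol start
gluster volume rebalance myvol status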
2017 Oct 24
2
trying to add a 3rd peer
All,
I am trying to add a third peer to my gluster install. The first 2 nodes
have been running for many months and have gluster 3.10.3-1.
I recently installed the 3rd node and gluster 3.10.6-1. I was able to start
the gluster daemon on it. Afterwards, I tried to add the peer from one of the 2
previous servers (gluster peer probe IPADDRESS).
That first peer started the communication with the 3rd peer. At
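For reference, the peer and name-resolution state can be checked from each existing node before probing again (the hostname below is a placeholder):
gluster peer status
gluster pool list
# the new node's name must resolve the same way on every existing node
getent hosts gluster3.example.com
gluster peer probe gluster3.example.com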
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar,
I am trying to understand the problem and have a few questions.
1. Is trashcan enabled only on the master volume?
2. Is the 'rm -rf' done on the master volume synced to the slave?
3. If trashcan is disabled, does the issue go away?
The geo-rep error just says that it failed to create the directory
"Oracle_VM_VirtualBox_Extension" on the slave.
Usually this would be because of gfid
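For reference, the trash setting on each side and the health of the geo-rep session can be checked like this (volume and host names are placeholders):
# on the master cluster
gluster volume get mastervol features.trash
gluster volume geo-replication mastervol slavehost::slavevol status detail
# on the slave cluster
gluster volume get slavevol features.trash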
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh,
thanks for your response...
answers inside...
best regards
Dietmar
Am 13.03.2018 um 06:38 schrieb Kotresh Hiremath Ravishankar:
> Hi Dietmar,
>
> I am trying to understand the problem and have a few questions.
>
> 1. Is trashcan enabled only on the master volume?
No, trashcan is also enabled on the slave. The settings are the same as on
the master, but the trashcan on the slave is complete
2017 Nov 01
1
Announcing Gluster release 3.10.7 (Long Term Maintenance)
The Gluster community is pleased to announce the release of Gluster
3.10.7 (packages available at [1]).
Release notes for the release can be found at [2].
We are still working on a further fix for the corruption issue when
sharded volumes are rebalanced, details as below.
* Expanding a gluster volume that is sharded may cause file corruption
- Sharded volumes are typically used for VM
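For reference, whether a volume is sharded, and therefore affected by this issue, can be checked before expanding or rebalancing it (the volume name is a placeholder):
gluster volume get myvol features.shard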
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have been facing another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on ubuntu 16.04.4).
For example, removing an entire directory with subfolders:
tron at gl-node1:/myvol-1/test1/b1$ rm -rf *
Afterwards, listing files in the trashcan:
tron at gl-node1:/myvol-1/test1$
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
Hello,
Recently we have twice had a partial gluster outage followed by a total
outage of all four nodes. Looking into the gluster mailing list, I found
a very similar case in
http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
but I'm not sure if this issue is fixed...
Even though this outage happened on glusterfs 3.7.18, which gets no more
updates since ~.20, I would kindly ask
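As a general pointer, whether the brick processes actually crashed and what they logged can be checked per node (the volume name is a placeholder; the brick log file name is normally derived from the brick path, with slashes turned into dashes):
gluster volume status myvol
# brick logs, one file per brick on each server
less /var/log/glusterfs/bricks/data-brick1.log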
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> Hello,
>
> Recently we have twice had a partial gluster outage followed by a total
> outage of all four nodes. Looking into the gluster mailing list, I found
> a very similar case in
> http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
If you are talking about a crash happening on bricks, were you
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
Hello Anoop,
thank you for your reply....
answers inside...
best regards
Dietmar
On 29.06.2017 10:48, Anoop C S wrote:
> On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
>> Hello,
>>
>> Recently we have twice had a partial gluster outage followed by a total
>> outage of all four nodes. Looking into the gluster mailing list, I found
>> a very similar case
1999 Jun 15
1
need undelete function
> ps: my little suggestion: if we add some parameters in smb.conf like
> protected dir = /home/share, /home/user1 ;
> trashcan dir = /smbtrash ;
Such an add-on to Samba would make me weep tears of joy. The trashcan is a
"security blanket" feature that Windows/MacOS/OS2/BeOS users now take for
granted. Convincing them that this is now impossible because we've
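For what it's worth, current Samba provides this through the vfs_recycle module; a minimal per-share sketch (share name and paths are placeholders):
[share]
    path = /home/share
    vfs objects = recycle
    recycle:repository = .recycle
    recycle:keeptree = yes
    recycle:versions = yes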