Displaying 20 results from an estimated 1000 matches similar to: "setting gfid on .trashcan/... failed - total outage"
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> Hello,
>
> recently we had two times a partial gluster outage followed by a total
> outage of all four nodes. Looking into the gluster mailing list i found
> a very similar case in
> http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
If you are talking about a crash happening on bricks, were you
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
Hello Anoop,
thank you for your reply....
answers inside...
best regards
Dietmar
On 29.06.2017 10:48, Anoop C S wrote:
> On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
>> Hello,
>>
>> recently we had two times a partial gluster outage followed by a total
>> outage of all four nodes. Looking into the gluster mailing list i found
>> a very similar case
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh,
thanks for your response...
answers inside...
best regards
Dietmar
Am 13.03.2018 um 06:38 schrieb Kotresh Hiremath Ravishankar:
> Hi Dietmar,
>
> I am trying to understand the problem and have few questions.
>
> 1. Is trashcan enabled only on master volume?
no, trashcan is also enabled on slave. settings are the same as on
master but trashcan on slave is complete
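For reference, whether the trash feature and its settings really match on both sides can be checked per volume with 'gluster volume get'; a minimal sketch, assuming placeholder volume names 'mastervol' and 'slavevol' (not taken from the thread):
# on a master node
gluster volume get mastervol features.trash
gluster volume get mastervol features.trash-dir
# on a slave node
gluster volume get slavevol features.trash
gluster volume get slavevol features.trash-dir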
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
i have been faced with another issue when using the trashcan feature on a
dist. repl. volume running a geo-replication. (gfs 3.12.6 on ubuntu 16.04.4)
e.g. removing an entire directory with subfolders:
tron@gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards, listing files in the trashcan:
tron@gl-node1:/myvol-1/test1$
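A minimal sketch of the reproduction described above, assuming the trash feature is enabled and that the FUSE mount /myvol-1 belongs to a volume of the same name (the volume name is an assumption):
# enable the trash feature on the master volume
gluster volume set myvol-1 features.trash on
# remove a directory tree through the FUSE mount
rm -rf /myvol-1/test1/b1/*
# the deleted entries should now show up under the trash directory
ls -lR /myvol-1/.trashcan/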
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar,
I am trying to understand the problem and have few questions.
1. Is trashcan enabled only on master volume?
2. Is the 'rm -rf' done on the master volume synced to the slave?
3. If trashcan is disabled, the issue goes away?
The geo-rep error just says that it failed to create the directory
"Oracle_VM_VirtualBox_Extension" on slave.
Usually this would be because of gfid
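When a gfid mismatch is suspected, the gfid of the affected directory can be compared directly on the brick back-ends of master and slave; a sketch with placeholder brick paths:
# on a master brick (path is a placeholder)
getfattr -n trusted.gfid -e hex /bricks/brick1/test1/b1/Oracle_VM_VirtualBox_Extension
# on the corresponding slave brick (path is a placeholder)
getfattr -n trusted.gfid -e hex /bricks/brick1/test1/b1/Oracle_VM_VirtualBox_Extension
# differing or missing trusted.gfid values would explain the failed directory creation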
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All,
we are running a dist. repl. volume on 4 nodes including geo-replication
to another location.
the geo-replication was running fine for months.
since 18th Jan. the geo-replication is faulty. the geo-rep log on the
master shows the following error in a loop, while the logs on the slave just
show 'I' (informational) messages...
somewhat suspicious are the frequent 'shutting down connection'
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote:
>Dear All,
>
>we are running a dist. repl. volume on 4 nodes including
>geo-replication
>to another location.
>the geo-replication was running fine for months.
>since 18th jan. the geo-replication is faulty.
2018 Jan 25
2
geo-replication command rsync returned with 3
Hi Kotresh,
thanks for your response...
i have made further tests based on ubuntu 16.04.3 (latest upgrades) and
gfs 3.12.5 with the following rsync versions:
1. ii  rsync    3.1.1-3ubuntu1
2. ii  rsync    3.1.1-3ubuntu1.2
3. ii  rsync    3.1.2-2ubuntu0.1
in each test all nodes had the same rsync version installed. all
2018 Jan 29
0
geo-replication command rsync returned with 3
Hi all,
by downgrade of
ii  libc6:amd64 2.23-0ubuntu10
to
ii  libc6:amd64 2.23-0ubuntu3
the problem was solved, at least in our gfs test environment running gfs
3.13.2 and 3.7.20 and in our production environment with 3.7.18.
possibly it helps someone...
best regards
Dietmar
Am 25.01.2018 um 14:06 schrieb Dietmar Putz:
>
> Hi Kotresh,
>
> thanks for your response...
>
> i
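On Ubuntu 16.04 such a downgrade can be done by pinning the exact libc6 version with apt; a sketch, assuming the older build is still available in the configured repositories:
# check the currently installed libc6 build
dpkg -l libc6
# downgrade to the version reported to work in this thread
apt-get install libc6=2.23-0ubuntu3
# prevent apt from upgrading it again until the issue is understood
apt-mark hold libc6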
2018 Jan 24
4
geo-replication command rsync returned with 3
Hi all,
i have made some tests on the latest Ubuntu 16.04.3 server image.
Upgrades were disabled...
the configuration was always the same...a distributed replicated volume
on 4 VMs with geo-replication to a dist. repl. volume on 4 VMs.
i started with 3.7.20, upgraded to 3.8.15, to 3.10.9, and to 3.12.5. After
each upgrade i tested the geo-replication, which worked well every time.
then i
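After each upgrade step the geo-replication health can be verified with the status command; a minimal sketch with placeholder master/slave names:
# session overview; the per-brick status should be Active or Passive, not Faulty
gluster volume geo-replication mastervol slavehost::slavevol status
# more detail, including crawl status and last-synced time
gluster volume geo-replication mastervol slavehost::slavevol status detail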
2018 Jan 25
0
geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master
and slave nodes the same?
I have seen that cause problems sometimes.
-Kotresh HR
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com>
wrote:
> Hi all,
> i have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades
> were disabled...
> the configuration was always the
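The version comparison suggested above can be scripted across all master and slave nodes; a sketch with placeholder hostnames:
# hostnames are placeholders; list every master and slave node here
for h in master1 master2 master3 master4 slave1 slave2 slave3 slave4; do
    echo "== $h =="
    ssh "$h" 'rsync --version | head -n1; dpkg -l rsync | grep "^ii"'
done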
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
---------- Forwarded message ----------
From: Jose Sanchez <josesanc at carc.unm.edu>
Date: 11 January 2018 at 22:05
Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks
each.
To: Nithya Balachandran <nbalacha at redhat.com>
Cc: gluster-users <gluster-users at gluster.org>
Hi Nithya
Thanks for helping me with this, I understand now , but I have few
2017 Jul 30
1
Lose gnfs connection during test
Hi all
I use Distributed-Replicate(12 x 2 = 24) hot tier plus
Distributed-Replicate(36 x (6 + 2) = 288) cold tier with gluster 3.8.4
for a performance test. When i set client/server.event-threads to small
values, e.g. 2, it works ok. But if i set client/server.event-threads to big
values, e.g. 32, the network connections always become unavailable during
the test, with the following error messages in stree
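For reference, both options mentioned are ordinary volume options and can be tuned per volume; a sketch with an assumed volume name 'testvol' and an intermediate value between the working (2) and failing (32) settings:
gluster volume set testvol client.event-threads 4
gluster volume set testvol server.event-threads 4
# verify the values currently in effect
gluster volume get testvol client.event-threads
gluster volume get testvol server.event-threads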
2010 Apr 10
3
nfs-alpha feedback
I ran the same dd tests from KnowYourNFSAlpha-1.pdf and performance is
inconsistent and causes the server to become unresponsive.
My server freezes every time when I run the following command:
dd if=/dev/zero of=garb bs=256k count=64000
I would also like to mount a path like: /volume/some/random/dir
# mount host:/gluster/tmp /mnt/test
mount: host:/gluster/tmp failed, reason given by server: No
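Mounting a subdirectory of a volume over the built-in NFS server is normally done by exporting the directory first; a hedged sketch (the volume name 'gluster' is inferred from the mount command above, and availability of these options in that early nfs-alpha build is an assumption):
# allow subdirectory exports and export the desired path
gluster volume set gluster nfs.export-dirs on
gluster volume set gluster nfs.export-dir "/tmp"
# mount the subdirectory over NFSv3 from the client
mount -t nfs -o vers=3 host:/gluster/tmp /mnt/test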
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya
Thanks for helping me with this, I understand now, but I have a few questions.
When i had it set up as replica (just 2 nodes with 2 bricks) and tried to add bricks, it failed.
> [root@gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a
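The 'already part of a volume' error usually means the brick directory still carries gluster metadata from an earlier use; a common but destructive cleanup sketch, only applicable if the brick holds no data that is still needed (run on the node that reports the conflict):
# remove leftover volume metadata from the brick root
setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
setfattr -x trusted.gfid /gdata/brick2/scratch
rm -rf /gdata/brick2/scratch/.glusterfs
# then retry the add-brick command quoted above
gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch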
2018 Apr 16
2
Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory
Hi,
We have a 3-node gluster setup where gluster is both the server and the
client.
Every few days we have some $random file or directory that does not exist
according to the FUSE mountpoint. When we try to access the file (stat,
cat, etc...) the filesystem reports that the file/directory does not exist,
even though it does. When we try to create the file/directory we receive
the following error
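A quick way to confirm this behaviour is to compare the FUSE view with a brick back-end before and after listing the parent directory; a sketch with placeholder paths:
# through the FUSE mount the entry appears to be missing
stat /mnt/glustervol/some/dir/somefile
# on one of the bricks the file actually exists
ls -l /bricks/brick1/glustervol/some/dir/somefile
# after listing the parent directory via FUSE, the lookup succeeds again
ls /mnt/glustervol/some/dir > /dev/null
stat /mnt/glustervol/some/dir/somefile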
2018 Apr 16
0
Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory
On Mon, Apr 16, 2018 at 1:54 PM, Niels Hendriks <niels at nuvini.com> wrote:
> Hi,
>
> We have a 3-node gluster setup where gluster is both the server and the
> client.
> Every few days we have some $random file or directory that does not exist
> according to the FUSE mountpoint. When we try to access the file (stat,
> cat, etc...) the filesystem reports that the
2018 Apr 16
1
Gluster FUSE mount sometimes reports that files do not exist until ls is performed on parent directory
On 16 April 2018 at 14:07, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote:
>
>
> On Mon, Apr 16, 2018 at 1:54 PM, Niels Hendriks <niels at nuvini.com> wrote:
>
>> Hi,
>>
>> We have a 3-node gluster setup where gluster is both the server and the
>> client.
>> Every few days we have some $random file or directory that does not exist
2017 Oct 24
2
brick is down but gluster volume status says it's fine
gluster version 3.10.6, replica 3 volume, daemon is present but does not
appear to be functioning
peculiar behaviour. If I kill the glusterfs brick daemon and restart
glusterd then the brick becomes available - but one of my other volumes'
bricks on the same server goes down in the same way; it's like whack-a-mole.
any ideas?
[root@gluster-2 bricks]# glv status digitalcorpora
> Status
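When a brick process has silently died while glusterd still reports it as up, the usual checks are the volume status, the running brick processes, and a forced volume start, which respawns only the bricks that are not running; a sketch using the volume name from the excerpt:
gluster volume status digitalcorpora
# compare against the brick daemons actually running on this node
ps aux | grep '[g]lusterfsd' | grep digitalcorpora
# respawn missing brick processes without affecting healthy ones
gluster volume start digitalcorpora force
# afterwards check the self-heal backlog on the replica 3 volume
gluster volume heal digitalcorpora info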
2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com>
wrote:
> gluster version 3.10.6, replica 3 volume, daemon is present but does not
> appear to be functioning
>
> peculiar behaviour. If I kill the glusterfs brick daemon and restart
> glusterd then the brick becomes available - but one of my other volumes
> bricks on the same server goes down in