similar to: trashcan on dist. repl. volume with geo-replication

Displaying 20 results from an estimated 300 matches similar to: "trashcan on dist. repl. volume with geo-replication"

2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar, I am trying to understand the problem and have a few questions. 1. Is trashcan enabled only on the master volume? 2. Is the 'rm -rf' done on the master volume synced to the slave? 3. If trashcan is disabled, does the issue go away? The geo-rep error just says that it failed to create the directory "Oracle_VM_VirtualBox_Extension" on the slave. Usually this would be because of gfid
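For reference, a minimal sketch of how the trash feature can be inspected and toggled per volume (assuming a master volume named "mastervol"; adjust the name to your setup):

  gluster volume get mastervol features.trash        # show whether the trashcan is currently enabled
  gluster volume set mastervol features.trash off    # disable it temporarily to test whether the geo-rep error goes away
  gluster volume set mastervol features.trash on     # re-enable it afterwards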
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inside... best regards Dietmar On 13.03.2018 at 06:38 Kotresh Hiremath Ravishankar wrote: > Hi Dietmar, > > I am trying to understand the problem and have a few questions. > > 1. Is trashcan enabled only on the master volume? no, trashcan is also enabled on the slave. settings are the same as on master but trashcan on slave is complete
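As a quick way to compare the trash-related settings on both sides (a sketch, assuming volumes named "mastervol" and "slavevol" on their respective clusters):

  gluster volume info mastervol | grep -i trash    # reconfigured trash options on the master
  gluster volume info slavevol  | grep -i trash    # same check on the slave cluster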
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All, we are running a dist. repl. volume on 4 nodes including geo-replication to another location. The geo-replication was running fine for months. Since 18th Jan. the geo-replication has been faulty. The geo-rep log on the master shows the following error in a loop, while the logs on the slave just show 'I' (informational) messages... somewhat suspicious are the frequent 'shutting down connection'
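A sketch of how the state of such a session can be checked from the master side (master volume "mastervol" and slave "slavehost::slavevol" are placeholders; log paths vary slightly between gluster versions):

  gluster volume geo-replication mastervol slavehost::slavevol status detail   # shows Faulty/Active/Passive per brick
  tail -f /var/log/glusterfs/geo-replication/mastervol/*.log                   # follow the master-side geo-rep log for the repeating error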
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is: "Errors selecting input/output files, dirs" On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: >Dear All, > >we are running a dist. repl. volume on 4 nodes including >geo-replication >to another location. >the geo-replication was running fine for months. >since 18th jan. the geo-replication is faulty.
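To see whether rsync itself fails outside of gluster, a minimal manual test between a master and a slave node can help (hypothetical path and hostname, shown only to illustrate checking the exit status):

  rsync -av /tmp/rsync-testdir/ slavehost:/tmp/rsync-testdir/
  echo "rsync exit code: $?"    # 3 would mean 'Errors selecting input/output files, dirs'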
2018 Jan 24
4
geo-replication command rsync returned with 3
Hi all, I have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades were disabled... The configuration was always the same... a distributed replicated volume on 4 VMs with geo-replication to a dist. repl. volume on 4 VMs. I started with 3.7.20 and upgraded to 3.8.15, to 3.10.9 and to 3.12.5. After each upgrade I tested the geo-replication, which worked well every time. Then I
2018 Jan 25
0
geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master and slave nodes the same? I have seen that cause problems sometimes. -Kotresh HR On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: > Hi all, > I have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades > were disabled... > the configuration was always the
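A quick way to compare the installed rsync version across all nodes (a sketch with hypothetical hostnames; substitute your master and slave nodes):

  for h in master1 master2 slave1 slave2; do
      echo -n "$h: "; ssh "$h" 'rsync --version | head -1'    # print the rsync version line per node
  done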
2018 Jan 25
2
geo-replication command rsync returned with 3
Hi Kotresh, thanks for your response... I have made further tests based on Ubuntu 16.04.3 (latest upgrades) and gfs 3.12.5 with the following rsync versions: 1. ii rsync 3.1.1-3ubuntu1 2. ii rsync 3.1.1-3ubuntu1.2 3. ii rsync 3.1.2-2ubuntu0.1. In each test all nodes had the same rsync version installed. all
2018 Jan 29
0
geo-replication command rsync returned with 3
Hi all, by downgrading libc6:amd64 from 2.23-0ubuntu10 to 2.23-0ubuntu3 the problem was solved, at least in the gfs test environment running gfs 3.13.2 and 3.7.20 and on our productive environment with 3.7.18. Possibly it helps someone... best regards Dietmar On 25.01.2018 at 14:06 Dietmar Putz wrote: > > Hi Kotresh, > > thanks for your response... > > i
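A sketch of such a downgrade on Ubuntu 16.04, assuming the older package build is still available in the configured repositories or the local cache (version string taken from the mail above):

  sudo apt-get install --allow-downgrades libc6=2.23-0ubuntu3    # pin back to the older glibc build
  sudo apt-mark hold libc6                                       # keep unattended upgrades from pulling it forward again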
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote: > Hello, > > recently we twice had a partial gluster outage followed by a total > outage of all four nodes. Looking into the gluster mailing list I found > a very similar case in > http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html If you are talking about a crash happening on bricks, were you
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
Hello Anoop, thank you for your reply.... answers inside... best regards Dietmar On 29.06.2017 10:48, Anoop C S wrote: > On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote: >> Hello, >> >> recently we twice had a partial gluster outage followed by a total >> outage of all four nodes. Looking into the gluster mailing list I found >> a very similar case
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
Hello, recently we twice had a partial gluster outage followed by a total outage of all four nodes. Looking into the gluster mailing list I found a very similar case in http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html but I'm not sure if this issue is fixed... Even though this outage happened on glusterfs 3.7.18, which gets no more updates since ~.20, I would kindly ask
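A sketch of how the gfid extended attribute on the .trashcan directory can be inspected directly on a brick (hypothetical brick path; getfattr comes from the attr package and needs root to see the trusted.* namespace):

  getfattr -n trusted.gfid -e hex /bricks/brick1/vol/.trashcan    # print the gfid set on the trashcan directory
  getfattr -d -m . -e hex /bricks/brick1/vol/.trashcan            # dump all extended attributes for comparison across bricks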
2017 Nov 25
1
How to read geo replication timestamps from logs
Folks, need help interpreting this message from my geo rep logs for my volume mojo. ssh%3A%2F%2Froot%40173.173.241.2%3Agluster%3A%2F%2F127.0.0.1%3Amojo-remote.log:[2017-11-22 00:59:40.610574] I [master(/bricks/lsi/mojo):1125:crawl] _GMaster: slave's time: (1511312352, 0) The epoch of 1511312352 is Tuesday, November 21, 2017 12:59:12 AM GMT. The clocks are using the same ntp stratum and
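The slave's time in that log line is a plain Unix epoch, so it can be converted on the command line (value taken from the message above):

  date -u -d @1511312352    # prints: Tue Nov 21 00:59:12 UTC 2017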
2018 Mar 06
1
geo replication
Hi, I have problems with geo replication on glusterfs 3.12.6 / Ubuntu 16.04. I can see a 'master volinfo unavailable' message in the master logfile. Any ideas? Master: Status of volume: testtomcat Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick gfstest07:/gfs/testtomcat/mount 49153 0
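A sketch of the usual first checks for a session in that state (master volume "testtomcat" is taken from the status output; the slave "slavehost::slavevol" is a placeholder):

  gluster volume geo-replication testtomcat slavehost::slavevol status detail   # session state per brick
  gluster volume geo-replication testtomcat slavehost::slavevol config          # dump the session configuration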
2012 Feb 22
3
Error 400 on SERVER: Cannot append, variable node_data is defined in this scope at
Hi, I have a problem that I can't get resolved. I have a hash like www.krzywanski.net/archives/703. With this hash I would like to add some extra hashes before passing it to the module; I have tried the code below. node testnode { class { 'testclass': nodes_data => { 'node1' => { 'server' =>
2014 Nov 12
2
Connection failing between 2 nodes with dropped packets error
Hi, I'm sometimes getting a failure connecting 2 nodes when Tinc is started and configured in a LAN. In the logs, there are some unexpected dropped packets with very high or negative seq. I can reproduce this issue ~2% of the time. When this happens, the 2 nodes can no longer ping or ssh each other through the tunnel interface, but using eth0 works fine. The connection can recover after at
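One way to capture more detail when this happens is to run the daemon on both nodes in the foreground with a higher debug level (a sketch, assuming a tinc network named "myvpn"):

  sudo tincd -n myvpn -D -d5    # stay in the foreground and log connection and packet handling, including dropped packets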
2005 Aug 08
3
Reg. getting codewords from codelengths
Hi, I am a bit confused about how code-words are derived from the codeword lengths. I would appreciate it if someone could point me in the right direction. I will take the example of an actual codebook that I found in a valid vorbis encoded file, as shown below. [SK] +------Codebook [0] -------- [SK] Codebook Dimensions = 1 [SK] Codebook Entries = 8 [SK] Unordered [SK] 1, 6, 3, 7, 2, 5, 4, 7, [SK] NO
2015 May 18
2
tinc stopped working after restart
Hi. I'm in desperate need of some good advice. I have a tinc network with 16 nodes. It's a star topology where all nodes connect to the one node (Node1) that has a static IP. Node 1 accepts incoming connections; Node 2 through 16 connect to Node1. One of the nodes (Node5) stopped working a while ago (2 - 3 weeks or so); other than that, everything was working fine. Today I
2010 Jan 31
2
[LLVMdev] Crash in PBQP register allocator
Hi Sebastian, It boils down to this: The previous heuristic solver could return infinite cost solutions in some rare cases (despite finite-cost solutions existing). The new solver is still heuristic, but it should always return a finite cost solution if one exists. It does this by avoiding early reduction of infinite spill cost nodes via R1 or R2. To illustrate why the early reductions can be a
2012 Sep 29
1
quota severe performance issue help
Dear gluster experts, We have encountered a severe performance issue related to the quota feature of gluster. My underlying fs is lvm with xfs format. The problem is that with quota enabled the io performance is about 26MB/s, but with quota disabled the io performance is 216MB/s. Anyone know what the problem is? BTW, I have reproduced it several times and it is related to quota indeed. Here's the
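A minimal sketch of how such a comparison can be reproduced (hypothetical volume name and mount point; oflag=direct bypasses the client page cache so the numbers reflect the volume itself):

  gluster volume quota testvol enable
  dd if=/dev/zero of=/mnt/testvol/ddtest bs=1M count=1024 oflag=direct    # measure write throughput with quota on
  gluster volume quota testvol disable
  dd if=/dev/zero of=/mnt/testvol/ddtest bs=1M count=1024 oflag=direct    # repeat with quota off and compare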
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks. [root at gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
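For completeness, a sketch of the usual commands for creating and starting such a session (master volume name taken from the mail; slave host and slave volume names are hypothetical, and passwordless ssh to the slave is assumed):

  gluster volume geo-replication gfsvol slavehost::gfsvol_slave create push-pem   # set up the session and distribute keys
  gluster volume geo-replication gfsvol slavehost::gfsvol_slave start
  gluster volume geo-replication gfsvol slavehost::gfsvol_slave status            # Active/Passive is healthy, Faulty indicates a problem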