Displaying 20 results from an estimated 400 matches similar to: "Latest glusterfs 3.8.5 server not compatible with livbirt libgfapi access"
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All,
we are running a dist. repl. volume on 4 nodes including geo-replication
to another location.
the geo-replication was running fine for months.
since 18th jan. the geo-replication is faulty. the geo-rep log on the
master shows the following error in a loop while the logs on the slave just
show 'I' (info) messages...
somewhat suspicious are the frequent 'shutting down connection'
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote:
>Dear All,
>
>we are running a dist. repl. volume on 4 nodes including
>geo-replication
>to another location.
>the geo-replication was running fine for months.
>since 18th jan. the geo-replication is faulty.
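As a hedged aside for anyone hitting the same loop: rsync exit code 3 is a file-selection error, so the usual first step is to check the session state and its configured log locations. A minimal sketch, assuming a hypothetical session from a master volume mastervol to slavehost::slavevol (names are placeholders, not taken from this thread):

# Per-worker health of the session; Faulty workers show up here.
gluster volume geo-replication mastervol slavehost::slavevol status detail

# Dump the session configuration, including where the worker logs are written.
gluster volume geo-replication mastervol slavehost::slavevol config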
2013 Jul 02
1
RDMA Volume Mount Not Functioning for Debian 3.4.beta4
Anyone else having these issues mounting RDMA-only volumes under Ubuntu? Install of Gluster via: https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
I'm guessing this build is looking for an include somewhere that is just misplaced?
glusterfs 3.4.0beta4 built on Jun 28 2013 16:16:07
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc.
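For reference, a minimal sketch of mounting an RDMA-transport volume from a client, assuming a placeholder server name server1 and volume name rdmavol; transport=rdma is the documented mount option for forcing RDMA instead of TCP:

# Mount over RDMA; this fails if the client build lacks the RDMA transport.
mount -t glusterfs -o transport=rdma server1:/rdmavol /mnt/rdmavol

# Confirm the volume was created with an rdma (or tcp,rdma) transport type.
gluster volume info rdmavol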
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
Hi list.
I am dealing with DRBD (+GFS as its DLM). GFS configuration needs a
CLVMD configuration. So, after synchronizing my (two) /dev/drbd0 block
devices, I start the clvmd service and try to create a clustered
logical volume. I get this:
On "alice":
[root@alice ~]# pvcreate /dev/drbd0
Physical volume "/dev/drbd0" successfully created
[root@alice ~]# vgcreate
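A minimal sketch of the clustered-LVM step being attempted here, with a hypothetical volume group and logical volume name (vg_cluster, lv_gfs) rather than the ones from the original post:

# Create a clustered volume group on the synchronized DRBD device;
# -c y marks it clustered so clvmd coordinates metadata across nodes.
vgcreate -c y vg_cluster /dev/drbd0

# Carve out a logical volume for the GFS filesystem.
lvcreate -L 10G -n lv_gfs vg_cluster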
2017 Jul 10
0
Very slow performance on Sharded GlusterFS
Hi Krutika,
May I kindly ping you and ask whether you have any idea yet, or have figured out what the issue may be?
I am eagerly awaiting your reply :)
Apologies for the ping :)
-Gencer.
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gencer at gencgiyen.com
Sent: Thursday, July 6, 2017 11:06 AM
To: 'Krutika
2017 Jul 12
1
Very slow performance on Sharded GlusterFS
Hi,
Sorry for the late response.
No, the eager-lock experiment was more to see if the implementation had any
new bugs.
It doesn't look like it does. I think having it on would be the right thing
to do. It will reduce the number of fops having to go over the network.
Coming to the performance drop, I compared the volume profile output for
stripe and 32MB shard again.
The only thing that is
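For context, eager-lock is an ordinary volume option, so turning it on for the test is a one-liner; a sketch assuming a placeholder volume name testvol:

# Hold write locks across consecutive FOPs instead of re-acquiring them per write.
gluster volume set testvol cluster.eager-lock on

# Confirm the effective value.
gluster volume get testvol cluster.eager-lock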
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika,
I also did one more test. I re-created another volume (a single volume; the old one was destroyed and deleted), then ran 2 dd tests, one for 1GB and the other for 2GB. Both use a 32MB shard size with eager-lock off.
Samples:
sr:~# gluster volume profile testvol start
Starting volume profile on testvol has been successful
sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1
1+0 records in
1+0
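A hedged sketch of the rest of the test as described (the second dd of 2GB, then dumping the per-brick profile); the output file name below is a placeholder, not the one used in the original run:

# Second test file: 2GB written in 1GB blocks.
dd if=/dev/zero of=/testvol/dtestfile2g bs=1G count=2

# Capture the per-brick latency and throughput counters gathered since 'profile start'.
gluster volume profile testvol info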
2013 Dec 04
1
Testing failover and recovery
Hello,
I've found GlusterFS to be an interesting project. I don't have much
experience with it (although I do from similar use cases with DRBD+NFS
setups), so I set up a test case to try out failover and recovery.
For this I have a setup with two glusterfs servers (each is a VM) and one
client (also a VM).
I'm using GlusterFS 3.4 btw.
The servers manage a gluster volume created as:
gluster volume
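A minimal sketch of the kind of two-server replica volume and client mount used for this sort of failover test; server names, brick paths and the volume name are placeholders, and the backup volfile-server mount option (spelled backupvolfile-server on older 3.4-era clients) only matters for fetching the volfile when the first server is down:

# On a server: a 1x2 replicated volume across the two VMs.
gluster volume create testvol replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start testvol

# On the client: mount via server1, with server2 as a volfile fallback.
mount -t glusterfs -o backup-volfile-servers=server2 server1:/testvol /mnt/testvol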
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Hi,
Sorry I didn't confirm the results sooner.
Yes, it's working fine without issues for me.
It would be good if anyone else can confirm as well, so we can be sure it's 100% resolved.
--
Respectfully
Mahdi A. Mahdi
________________________________
From: Krutika Dhananjay <kdhananj at redhat.com>
Sent: Tuesday, June 6, 2017 9:17:40 AM
To: Mahdi Adnan
Cc: gluster-user; Gandalf Corvotempesta; Lindsay
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Any additional tests would be great, as a similar bug was detected and
fixed some months ago, and after that this bug arose.
It is still unclear to me why two very similar bugs were discovered at two
different times for the same operation.
How is this possible?
If you fixed the first bug, why wasn't the second one triggered in your
test environment?
On 6 Jun 2017 at 10:35 AM, "Mahdi
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Great news.
> Is this planned to be published in next release?
>
> On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
> wrote:
>
>> Thanks for that update.
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks!
On 5 Jun 2017 at 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Great news.
>> Is this planned to be published in next
1997 Oct 22
0
R-alpha: na.woes
1) hist() does not take NA's. Incompatible with Splus, probably just a
bug?
2) I do wish we could somehow get rid of the misfeatures of indexing
with logical NA's:
> table(juul$menarche, juul$tanner)
          I  II III  IV   V
  No    221  43  32  14   2
  Yes     1   1   5  26 202
> juul[juul$menarche=="Yes" & juul$tanner=="I",]
...and you find yourself with a listing of 477
2019 Jun 12
1
Proper command for replace-brick on distribute–replicate?
On 12/06/19 1:38 PM, Alan Orth wrote:
> Dear Ravi,
>
> Thanks for the confirmation. I replaced a brick in a volume last night
> and by the morning I see that Gluster has replicated data there,
> though I don't have any indication of its progress. The `gluster v
> heal volume info` and `gluster v heal volume info split-brain` are all
> looking good so I guess that's
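For readers landing here from search, a hedged sketch of the replace-brick flow being discussed, with placeholder volume and brick names; 'commit force' is the supported form for replicated volumes, and healing progress is then watched with heal info:

# Replace the old brick; self-heal populates the new one afterwards.
gluster volume replace-brick myvol server3:/bricks/old server3:/bricks/new commit force

# Watch the heal queue drain; the split-brain list should stay empty.
gluster volume heal myvol info
gluster volume heal myvol info split-brain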
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news.
Is this planned to be published in next release?
On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
wrote:
> Thanks for that update. Very happy to hear it ran fine without any issues.
> :)
>
> Yeah so you can ignore those 'No such file or directory' errors. They
> represent a transient state where DHT in the client process
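For completeness, the rebalance being discussed is driven by the standard CLI; a sketch with a placeholder volume name:

# Start a rebalance after adding bricks; data migrates in the background.
gluster volume rebalance myvol start

# Poll per-node progress (files scanned/moved, failures, run time).
gluster volume rebalance myvol status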
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi,
Did you get a chance to verify this fix again?
If this fix works for you, is it OK if we move this bug to CLOSED state and
revert the rebalance-cli warning patch?
-Krutika
On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> Hello,
>
>
> Yes, I forgot to upgrade the client as well.
>
> I did the upgrade and created a new volume,
2009 Jan 24
2
rsync with --copy-devices patch and device-target with --write-batch doesn't work
Hi List!
I want to use rsync to create differential backups of my lvm-snapshots.
fullbackup-filename: /mnt/sdc1/snapshotvergleich/rootbackup1.img
current snapshot: /dev/vg0/rootbackup
note: compiled-in --copy-devices-patch
root@xp8main3:/usr/local/src/rsync# ./rsync --version
rsync version 3.1.0dev protocol version 31.PR5
Copyright (C) 1996-2009 by Andrew Tridgell, Wayne Davison, and
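A hedged sketch of the batch workflow being attempted (and reported here as failing), assuming the --copy-devices patch is compiled in as stated; the batch file path and second target are placeholders:

# Record the deltas between the block device and the previous full image
# into a batch file while updating the image.
rsync --copy-devices --write-batch=/mnt/sdc1/root.batch \
      /dev/vg0/rootbackup /mnt/sdc1/snapshotvergleich/rootbackup1.img

# Later, replay the recorded deltas onto another copy of the full image.
rsync --read-batch=/mnt/sdc1/root.batch /mnt/backup2/rootbackup1.img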
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
Krutika, I'm sorry I forgot to add logs. I attached them now.
Thanks,
Gencer.
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gencer at gencgiyen.com
Sent: Thursday, July 6, 2017 10:27 AM
To: 'Krutika Dhananjay' <kdhananj at redhat.com>
Cc: 'gluster-user' <gluster-users at gluster.org>
Subject: Re:
2017 Jul 04
0
Very slow performance on Sharded GlusterFS
Hi Krutika,
Thank you so much for your reply. Let me answer all:
1. I have no idea why it did not get distributed over all bricks.
2. Hm.. This is really weird.
And others;
No, I use only one volume. When I tested sharded and striped volumes, I manually stopped the volume, deleted it, purged the data (the data inside the bricks/disks) and re-created it by using this command:
sudo gluster
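A hedged sketch of how a sharded test volume of this kind is typically assembled; host names (sr1, sr2), brick paths and the volume name are placeholders, and features.shard / features.shard-block-size are the relevant volume options:

# Distributed-replicated volume; bricks are ordered so each replica pair spans both hosts.
sudo gluster volume create testvol replica 2 \
    sr1:/bricks/b1 sr2:/bricks/b1 sr1:/bricks/b2 sr2:/bricks/b2

# Enable sharding with the 32MB block size used in these tests.
sudo gluster volume set testvol features.shard on
sudo gluster volume set testvol features.shard-block-size 32MB
sudo gluster volume start testvol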
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
Just noticed that the way you have configured your brick order during
volume-create makes both replicas of every set reside on the same machine.
That apart, do you see any difference if you change shard-block-size to
512MB? Could you try that?
If it doesn't help, could you share the volume-profile output for both the
tests (separate)?
Here's what you do:
1. Start profile before starting
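A sketch of the usual shape of that profiling workflow, with a placeholder volume name; not necessarily the exact steps listed in the original message:

# 1. Start collecting per-brick statistics.
gluster volume profile testvol start
# 2. Run the test workload (dd, etc.) against the mounted volume.
# 3. Capture the counters accumulated during the run.
gluster volume profile testvol info > profile-512MB-shard.txt
# 4. Stop profiling once done.
gluster volume profile testvol stop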