similar to: Issue in using volume sync command

Displaying 20 results from an estimated 4000 matches similar to: "Issue in using volume sync command"

2013 Aug 08
2
not able to restart the brick for distributed volume
Hi All, I am facing issues restarting a gluster volume. When I start the volume after stopping it, gluster fails to start it. Below is the message that I get on the CLI: /root> gluster volume start _home volume start: _home: failed: Commit failed on localhost. Please check the log file for more details. The logs say that it was unable to start the brick [2013-08-08
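"Commit failed on localhost" means the brick process itself refused to start, and the actual reason lands in the brick's log rather than on the CLI. A minimal sketch of pulling the error lines out of such a log; the log content and the /tmp path are stand-ins (real brick logs live under /var/log/glusterfs/bricks/ with names derived from the brick path):

```shell
# Stand-in brick log; a real one lives under /var/log/glusterfs/bricks/.
cat > /tmp/brick-sample.log <<'EOF'
[2013-08-08 10:00:01] I [glusterfsd.c] 0-glusterfs: starting brick process
[2013-08-08 10:00:02] E [posix.c] 0-_home-posix: mkdir of /brick/.glusterfs failed: Permission denied
EOF

# Gluster marks log severity with a single letter; lines containing " E "
# are errors, so filtering on them usually surfaces why the brick died.
grep ' E ' /tmp/brick-sample.log
```

If the cause is transient, `gluster volume start <vol> force` is the usual next step after fixing it.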
2013 Feb 28
0
Issue in the syncing operation for the same file on two nodes
Hi All, below is the sequence of steps I took to reproduce the issue: 1) I created a file named DEMO of size 10K on a replicated gluster volume on SERVER 1, and it got replicated to the other gluster volume on SERVER 2 as usual. 2) After this, I unmounted the brick of SERVER 2 and stopped the glusterd service on SERVER 2. 3) I then grew the DEMO file to a size of 2M on the SERVER
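The steps above open a window in which the two replicas diverge; in GlusterFS it is AFR self-heal (triggered on access, or by the self-heal daemon) that copies the newer content back to the returning brick. A toy stand-in, with plain directories in place of bricks and an explicit copy in place of AFR, just to show the reconciliation direction (real AFR picks the source from per-brick changelog xattrs, not by comparing file contents):

```shell
# Two directories standing in for the two replica bricks.
mkdir -p /tmp/brick1 /tmp/brick2
printf 'old 10K content' > /tmp/brick1/DEMO
cp /tmp/brick1/DEMO /tmp/brick2/DEMO        # initial replication

# brick2 "goes down"; brick1 keeps taking writes.
printf 'new 2M content' > /tmp/brick1/DEMO

# Heal direction: the brick that kept taking writes is the source.
if ! cmp -s /tmp/brick1/DEMO /tmp/brick2/DEMO; then
  cp /tmp/brick1/DEMO /tmp/brick2/DEMO
fi
cmp -s /tmp/brick1/DEMO /tmp/brick2/DEMO && echo healed
```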
2013 Oct 08
0
NFS client side - ls command giving input/output error
Hi All, I have created a distributed volume on one server, which is also a DHCP server. I am booting another blade which gets its kernel image from the DHCP server via an NFS mount backed by the gluster distributed volume. This volume is successfully NFS-mounted on the client side; I can even cd into the directories contained in the NFS mount and run touch, cat and cp operations on a file in
2013 Oct 14
0
Glusterfs 3.4.1 not able to mount the exports containing soft links
Hi, I am running the GlusterFS 3.4.1 NFS server. I have created a distributed volume on server 1, and I am trying to mount a soft link contained in the volume over NFS from server 2, but it is failing with the error "mount.nfs: an incorrect mount option was specified". Below is the volume on server 1 that I am trying to export: server 1 sh# gluster volume info all Volume
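Two things commonly sit behind that exact mount.nfs error with Gluster: the built-in NFS server speaks NFSv3 only (so a client defaulting to v4 must be forced down), and anything below the volume root has to be explicitly enabled as a subdirectory export. A command sketch for a live cluster, not runnable here; `testvol`, `server1` and the paths are placeholders, and whether this resolves the symlink case specifically is an assumption:

```shell
# Server side: allow mounting paths below the volume root over NFS.
gluster volume set testvol nfs.export-dirs on
gluster volume set testvol nfs.export-dir "/shared"

# Client side: force NFSv3, the only version the Gluster NFS server speaks.
mount -t nfs -o vers=3,nolock server1:/testvol/shared /mnt/shared
```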
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
Hi list. I am dealing with DRBD (+GFS as its DLM). GFS configuration needs a CLVMD configuration. So, after synchronizing my (two) /dev/drbd0 block devices, I start the clvmd service and try to create a clustered logical volume. I get this: On "alice": [root at alice ~]# pvcreate /dev/drbd0 Physical volume "/dev/drbd0" successfully created [root at alice ~]# vgcreate
2004 Apr 20
1
Samba 3.0.2a - Erroneously rejects NTLMv2 but accepts NTLM
Hello experts, I'll try and keep this brief but detailed (if that's possible). I'm sure I don't understand the technologies sufficiently, but I believe I'm seeing counter-intuitive behavior with my Samba 3 setup. What I want is nice, tight Win 2K3 security. What I've got is ADS integration, including domain user authentication using winbind, but I can't get the security level right. Problem
2005 Jan 15
7
Access denied changing file attributes
Hi! I've been tearing my hair out trying to get DOS file attributes to work with Samba. Basically, I have it all set up so that the user mbolingbroke (me) can write to this Supernova Backup share I have - this all works fine. However, since this is going to back up my Windows machine, I want to preserve the file attributes. To this end, I've set up mapping of the attributes using "map
2013 Dec 04
1
Testing failover and recovery
Hello, I've found GlusterFS to be an interesting project. I don't have much experience with it yet (although I do from similar use cases with DRBD+NFS setups), so I set up some test cases to try out failover and recovery. For this I have a setup with two glusterfs servers (each a VM) and one client (also a VM). I'm using GlusterFS 3.4, btw. The servers manage a gluster volume created as: gluster volume
2005 May 18
1
Samba Compile Problem on Solaris 2.8
So I am compiling the newest release of Samba, 3.0.14a, on Solaris 2.8. I can get it to compile, but I don't get all the built-in modules that I should, and smbd, when fired up, bombs out with something like: ------- derek@supernova:/opt/UMsmb/sbin# ./smbd -c /etc/samba/smb.conf -i smbd version 3.0.14a started. Copyright Andrew Tridgell and the Samba Team 1992-2004 No builtin nor plugin backend for
2018 Jan 24
1
fault tolerancy in glusterfs distributed volume
I have made a distributed replica-3 volume with 6 nodes. I mean this:
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: f271a9bd-6599-43e7-bc69-26695b55d206
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.2:/brick
Brick2: 10.0.0.3:/brick
Brick3: 10.0.0.1:/brick
Brick4: 10.0.0.5:/brick
Brick5: 10.0.0.6:/brick
Brick6:
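In a "2 x 3" distributed-replicate volume, fault tolerance is per replica set, and the sets are formed from the bricks in the order they were passed to `gluster volume create`: the first three bricks are one set, the next three the other. A file lives in exactly one set and survives as long as that set keeps quorum. A small sketch of that grouping (generic brick names, since the post's sixth brick address is cut off):

```shell
# Group 6 bricks into replica sets of 3, in listed order, as DHT/AFR does.
replica=3
i=0
set_no=1
for b in brick1 brick2 brick3 brick4 brick5 brick6; do
  echo "replica set $set_no: $b"
  i=$((i + 1))
  [ $((i % replica)) -eq 0 ] && set_no=$((set_no + 1))
done > /tmp/replica-sets.txt
cat /tmp/replica-sets.txt
```

So losing brick1 and brick4 (one from each set) is survivable, while losing brick1 and brick2 (two of the same set) costs that set its quorum.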
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with libvirt libgfapi access
Hi, After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using libgfapi are no longer able to start. The libvirt log file shows: [2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up [2016-11-02 14:26:41.864075] I [MSGID:
2017 Jul 10
0
Very slow performance on Sharded GlusterFS
Hi Krutika, may I kindly ping you and ask whether you have any idea yet, or have figured out what the issue may be? I am awaiting your reply with four eyes :) Apologies for the ping :) -Gencer. From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gencer at gencgiyen.com Sent: Thursday, July 6, 2017 11:06 AM To: 'Krutika
2012 Nov 27
1
Performance after failover
Hey, all. I'm currently trying out GlusterFS 3.3. I've got two servers and four clients, all on separate boxes. I've got a Distributed-Replicated volume with 4 bricks, two from each server, and I'm using the FUSE client. I was trying out failover, currently testing for reads. I was reading a big file, using iftop to see which server was actually being read from. I put up an
2017 Jul 12
1
Very slow performance on Sharded GlusterFS
Hi, Sorry for the late response. No, so eager-lock experiment was more to see if the implementation had any new bugs. It doesn't look like it does. I think having it on would be the right thing to do. It will reduce the number of fops having to go over the network. Coming to the performance drop, I compared the volume profile output for stripe and 32MB shard again. The only thing that is
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika, I also did one more test. I re-created another volume (a single volume; the old one was destroyed and deleted), then ran 2 dd tests, one for 1GB and the other for 2GB, both with a 32MB shard and eager-lock off. Samples: sr:~# gluster volume profile testvol start Starting volume profile on testvol has been successful sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1 1+0 records in 1+0
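The measurement itself is just dd against the mount, bracketed by `gluster volume profile <vol> start`/`info`. A scaled-down sketch of the dd half, writing 32 MB (the shard size) to /tmp instead of 1 GB to the gluster mount, so it stays self-contained and cheap:

```shell
# Write one 32 MB file of zeros, as the post does against /testvol.
dd if=/dev/zero of=/tmp/dtestfile bs=1M count=32 2>/dev/null

# dd's transfer summary is what the post compares between the sharded and
# striped volumes; here we just confirm the byte count that landed on disk.
wc -c < /tmp/dtestfile
```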
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Hi, sorry I didn't confirm the results sooner. Yes, it's working fine without issues for me. If anyone else can confirm, we can be sure it's 100% resolved. -- Respectfully, Mahdi A. Mahdi ________________________________ From: Krutika Dhananjay <kdhananj at redhat.com> Sent: Tuesday, June 6, 2017 9:17:40 AM To: Mahdi Adnan Cc: gluster-user; Gandalf Corvotempesta; Lindsay
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Any additional tests would be great, as a similar bug was detected and fixed some months ago, and after that this bug arose. It is still unclear to me why two very similar bugs were discovered at two different times for the same operation. How is this possible? If you fixed the first bug, why wasn't the second one triggered in your test environment? On 6 Jun 2017 10:35 AM, "Mahdi
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 -Krutika On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Great news. > Is this planned to be published in the next release? > > On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> > wrote: > >> Thanks for that update.
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks! On 5 Jun 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 > > -Krutika > > On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Great news. >> Is this planned to be published in next
1997 Oct 22
0
R-alpha: na.woes
1) hist() does not take NA's. Incompatible with Splus, probably just a bug? 2) I do wish we could somehow get rid of the misfeatures of indexing with logical NA's: > table(juul$menarche, juul$tanner) I II III IV V No 221 43 32 14 2 Yes 1 1 5 26 202 > juul[juul$menarche=="Yes" & juul$tanner=="I",] ...and you find yourself with a listing of 477