similar to: Problems with striped-replicated volumes on 3.3.1

Displaying 20 results from an estimated 800 matches similar to: "Problems with striped-replicated volumes on 3.3.1"

2012 Oct 26
1
How to resolve split-brain and replication failures?
Hi All, What are the recommended procedures for resolving split-brain conditions on files and replication failures? Thanks, Michael Michael Kushnir System Architect / Engineer Communications Engineering Branch Lister Hill National Center for Biomedical Communications National Library of Medicine 8600
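For reference, a common manual procedure on 3.3-era GlusterFS (a sketch only; the volume name, brick path, and gfid link below are hypothetical) is to list the affected files, remove the bad copy directly on one brick together with its gfid hard link, and then trigger self-heal from a client:

    # list files currently flagged as split-brain
    gluster volume heal myvol info split-brain

    # on the brick holding the bad copy: remove the file and its
    # .glusterfs gfid hard link, then re-stat from a client to heal
    rm /export/brick1/path/to/file
    rm /export/brick1/.glusterfs/aa/bb/aabb1234-...   # hypothetical gfid path
    stat /mnt/myvol/path/to/file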
2012 Oct 31
2
Best practices for creating bricks
Hello, I am working with several Dell PE720xd. I have 24 disks per server at my disposal with a high-end RAID card with 1GB RAM and BBC. I will be building a distributed-replicated volume. Is it better for me to set up one or two large RAID0 arrays and use those as bricks, or should I make each hard drive a brick? This will be back-end storage for an image search engine with lots of small file
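Whichever layout is chosen, the bricks are expressed the same way at creation time; as a hedged sketch with invented hostnames and mount points, a distributed-replicated volume with one RAID-backed brick per server would look like:

    # one RAID0-backed filesystem per server, exposed as a single brick
    gluster volume create images replica 2 transport tcp \
        pe720-1:/bricks/raid0 pe720-2:/bricks/raid0 \
        pe720-3:/bricks/raid0 pe720-4:/bricks/raid0
    gluster volume start images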
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug: [2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gv0-stripe-0: Failed to get stripe-size [2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument) Is there a fix for this in 3.3.1 or do we need to move to git HEAD to make this work? M. --
2013 Jan 25
1
Striped Replicated Volumes: create files error.
Hi there, each time I copy (or dd or similar) a file to a striped replicated volume I get an error: the argument is not valid. An empty file is created. If I now rerun the copy, it works. This is independent of the client platform. We are using version 3.3.1. Mit freundlichen Grüßen / Kind regards Axel Weber
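A minimal reproduction of the reported behaviour (mount point hypothetical): the first write fails with "Invalid argument" and leaves an empty file, and an immediate retry succeeds:

    dd if=/dev/zero of=/mnt/srvol/test.bin bs=1M count=4   # fails: Invalid argument
    ls -l /mnt/srvol/test.bin                              # 0-byte file left behind
    dd if=/dev/zero of=/mnt/srvol/test.bin bs=1M count=4   # retry succeeds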
2012 Mar 16
1
replicated-striped volume growing question
Hi, I have the following question: if I build a replicated-striped volume (one replica and one stripe), when I want to grow that volume, can I grow it by adding one brick and its replica, or do I have to add the stripe and its replica as well? Hope you can help me, thanks in advance Juan Brenes Print this email only if necessary. Act responsibly toward the environment.
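The usual constraint (stated here as an assumption about 3.x behaviour, with invented hostnames) is that add-brick must supply bricks in multiples of stripe count times replica count, so a stripe 2 replica 2 volume grows four bricks at a time:

    # stripe 2 x replica 2 => bricks must be added in sets of 4
    gluster volume add-brick myvol \
        server5:/brick server6:/brick server7:/brick server8:/brick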
2011 Dec 01
2
Creating striped replicated volume
We are having trouble creating a stripe 2 replica 2 volume across 4 hosts: user at gluster-fs-host-0:/gfsr$ sudo gluster volume create sr stripe 2 replica 2 glusterfs-host-0:/gfsr glusterfs-host-1:/gfsr glusterfs-host-2:/gfsr glusterfs-host-3:/gfsr wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> We are on glusterfs 3.2.5
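Striped-replicated volumes were only introduced in GlusterFS 3.3, so 3.2.5 rejects the combined stripe/replica form; on 3.3 or later the same command (sketched with the hosts from the thread) should parse cleanly:

    # requires glusterfs >= 3.3; 3.2.x reports "wrong brick type: replica"
    gluster volume create sr stripe 2 replica 2 \
        glusterfs-host-0:/gfsr glusterfs-host-1:/gfsr \
        glusterfs-host-2:/gfsr glusterfs-host-3:/gfsr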
2011 Dec 08
1
Can't create striped replicated volume
Hi, I'm trying to create a striped replicated volume but getting this error: gluster volume create cloud stripe 4 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
2012 Jun 01
3
Striped replicated volumes in Gluster 3.3.0
Hi all, I'm very happy to see the release of 3.3.0. One of the features I was waiting for is striped replicated volumes. We plan to store KVM images (from an OpenStack installation) on it. I read through the docs and found the following phrase: "In this release, configuration of this volume type is supported only for Map Reduce workloads." What does that mean exactly? Hopefully not,
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128KB), performance change if I reduce to a smaller block size?
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K blocksize) from the old gluster striped volume and it totaled 9.2TB. With the old setup I used the following option in a "volume stripe" block in the configuration file on a client: volume stripe type cluster/stripe option
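In 3.3 the stripe block size is a per-volume tunable rather than a hand-edited volfile option; as a sketch (the volume name and the 2MB value are illustrative), changing it away from the 128KB default looks like:

    # set a custom stripe block size, then confirm it took effect
    gluster volume set stripevol cluster.stripe-block-size 2MB
    gluster volume info stripevol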
2001 Aug 03
0
where to get non-stripped binaries?
Hello there, I'm trying to send a bug report in with bug_report.pl, and for now it complains my binaries are stripped. I'm not sure what that means, nor where I should be getting non-stripped binaries. What I got came from my Debian sources.list: # daily debian wine, straight from cvs deb http://gluck.debian.org/%7Eandreas/debian wine main deb-src http://gluck.debian.org/%7Eandreas/debian
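This one is about stripped debug symbols rather than storage striping; one way to check whether a given binary is stripped (a sketch, path assumed):

    # "not stripped" in the output means debug symbols are present
    file /usr/bin/wine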
2013 Sep 05
1
NFS can't be used by ESXi with Striped Volume
After some testing, I can confirm that ESXi can't use a Striped-Replicated volume over GlusterFS's NFS, but it does work with Distributed-Replicated. Does anyone know how or why? 2013/9/5 higkoohk <higkoohk at gmail.com> > Thanks Vijay ! > > It runs successfully after 'volume set images-stripe nfs.nlm off'. > > Now I can use ESXi with GlusterFS's NFS export. > > Many
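The quoted fix disables GlusterFS's built-in NFS lock manager for that volume, which is what ESXi was apparently tripping over; spelled out (volume name as given in the thread):

    # turn off NLM for the volume exported to ESXi, then remount the datastore
    gluster volume set images-stripe nfs.nlm off
    gluster volume info images-stripe   # confirm nfs.nlm: off under options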
2007 Nov 26
15
bad 1.6.3 striped write performance
Hi, I'm seeing what can only be described as dismal striped write performance from lustre 1.6.3 clients :-/ 1.6.2 and 1.6.1 clients are fine. 1.6.4rc3 clients (from cvs a couple of days ago) are also terrible. the below shows that the OS (centos4.5/5) or fabric (gigE/IB) or lustre version on the servers doesn't matter - the problem is with the 1.6.3 and 1.6.4rc3 client kernels
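For anyone reproducing this comparison, a hedged sketch using Lustre's lfs tool (stripe count, stripe size, and paths invented):

    # stripe files created under benchdir across 4 OSTs with a 1MB stripe size
    lfs setstripe -c 4 -s 1m /mnt/lustre/benchdir
    # time a large sequential write through the striped layout
    dd if=/dev/zero of=/mnt/lustre/benchdir/bigfile bs=1M count=4096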
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi, once I created a zpool of single vdevs not using mirroring of any kind. Now I wonder if it's possible to add vdevs and mirror the currently existing ones. Thanks, budy
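For reference, converting in place is possible: zpool attach turns a single-disk vdev into a mirror. A sketch with invented pool and device names:

    # attach a new disk to each existing single-disk vdev
    zpool attach tank c0t0d0 c0t4d0
    zpool attach tank c0t1d0 c0t5d0
    zpool status tank   # vdevs now resilver and show as mirror-0, mirror-1, ...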
2008 Dec 09
1
File uploaded to webDAV server on GlusterFS AFR - ends up without xattr!
Hello list. I'm testing GlusterFS AFR mode as a solution for implementing highly available webDAV file storage for our production environment. While doing performance tests I've noticed strange behavior: files which are uploaded via a webDAV server end up without extended attributes, which removes the ability to self-heal. The setup is a simple testing environment with 2
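A quick way to confirm whether the replica metadata survived, as a sketch (brick path hypothetical), is to dump the file's extended attributes directly on a brick:

    # AFR keeps its changelog in trusted.afr.* xattrs; if these are
    # missing, the file cannot be self-healed
    getfattr -m . -d -e hex /export/brick1/path/to/uploaded-file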
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off of it but not when I use rsync to copy files off: [user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/ cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort cp: closing
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hi, I see a lot of the following messages in the logs: [2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing [2018-02-04 07:41:16.189349] W [MSGID: 109011] [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no subvolume for hash (value) = 122440868 [2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk] 0-glusterfs-fuse:
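The "no subvolume for hash" warnings typically indicate directory layouts that no longer cover the full hash range; one commonly suggested step (volume name taken from the log, commands as a sketch) is a fix-layout rebalance:

    # rewrite directory layouts so every hash value maps to a subvolume
    gluster volume rebalance gv0 fix-layout start
    gluster volume rebalance gv0 status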
2018 Feb 05
0
Fwd: Troubleshooting glusterfs
On 5 February 2018 at 15:40, Nithya Balachandran <nbalacha at redhat.com> wrote: > Hi, > > > I see a lot of the following messages in the logs: > [2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] > 0-glusterfs: No change in volfile,continuing > [2018-02-04 07:41:16.189349] W [MSGID: 109011] > [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you so much, I think we are close to building a stable storage solution according to your recommendations. Here's our rebalance log - please don't pay attention to error messages after 9AM - this is when we manually destroyed the volume to recreate it for further testing. Also, all remove-brick operations you can see in the log were executed manually when recreating the volume.
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you for your help in figuring this out! We changed our configuration, and after a successful test yesterday we have run into a new issue today. The test, involving moderate read/write (~20-30 Mb/s) and scaling the storage, ran for about 3 hours, and at some point the system got stuck. At the user level there are errors like this when trying to work with the filesystem: OSError:
2013 Jun 17
0
gluster client timeouts / found conflict
Hi list Recently I've experienced more and more input/output errors from my most write-heavy gluster filesystem. The logfiles on the gluster servers show nothing, but the client(s) that get the input/output errors (and timeouts) will, as far as I can tell, get errors such as: [2013-06-14 15:55:56] W [fuse-bridge.c:493:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/369/60702093) inode (ptr=0x1efd440,
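When clients hit timeouts like these, one knob that is often examined (a sketch; volume name, value, and log path are illustrative) is the client-side ping timeout, alongside the client log itself:

    # default is 42 seconds; raising it tolerates slow bricks longer,
    # at the cost of slower failover
    gluster volume set bigvol network.ping-timeout 60
    tail -f /var/log/glusterfs/mnt-bigvol.log   # client-side log (path varies)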