similar to: Creating striped replicated volume

Displaying 20 results from an estimated 10000 matches similar to: "Creating striped replicated volume"

2011 Dec 08
1
Can't create striped replicated volume
Hi, I'm trying to create a striped replicated volume but getting this error: gluster volume create cloud stripe 4 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
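A minimal sketch of a pre-flight check, using only the counts and hostnames quoted in the post: the "wrong brick type" parse error itself comes from pre-3.3 releases not accepting stripe and replica together on one volume, but on top of that, stripe 4 replica 2 needs bricks in multiples of 8, not the 4 listed.

```shell
# check_bricks STRIPE REPLICA brick... -- verify the brick count is a
# whole multiple of stripe * replica before calling `gluster volume create`.
check_bricks() {
    stripe=$1; replica=$2; shift 2
    need=$((stripe * replica))
    if [ $(($# % need)) -ne 0 ]; then
        echo "brick count $# is not a multiple of stripe*replica ($need)"
        return 1
    fi
    echo "ok: $# bricks, $(($# / need)) distribute subvolume(s)"
}

# The exact brick list from the post fails the check:
check_bricks 4 2 nebula1:/dataPool nebula2:/dataPool \
    nebula3:/dataPool nebula4:/dataPool || true
# -> brick count 4 is not a multiple of stripe*replica (8)
```

With 8 bricks (or any multiple of 8) and GlusterFS 3.3 or later, the quoted `gluster volume create cloud stripe 4 replica 2 transport tcp ...` form should parse.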
2012 Mar 16
1
replicated-striped volume growing question
Hi, I have the following question: if I build a replicated-striped volume (one replica and one stripe), when I want to grow that volume, can I grow it by adding one brick and its replica, or do I have to add the stripe and its replica as well? Hope you can help me, thanks in advance. Juan Brenes. Print this email only if necessary. Act responsibly toward the environment.
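A minimal sketch answering the growth question above, with hypothetical volume and brick names: a stripe S, replica R volume can only grow by whole distribute subvolumes, i.e. S*R bricks per add-brick call, so you cannot add just one brick and its replica unless the stripe count is 1.

```shell
# expansion_unit STRIPE REPLICA -- smallest number of bricks a single
# add-brick call must supply to extend a striped-replicated volume.
expansion_unit() {
    echo $(($1 * $2))
}

expansion_unit 2 2   # -> 4
# e.g. (hypothetical hosts):
# gluster volume add-brick <VOLNAME> h5:/brick h6:/brick h7:/brick h8:/brick
```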
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug: [2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gv0-stripe-0: Failed to get stripe-size [2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument) Is there a fix for this in 3.3.1 or do we need to move to git HEAD to make this work? M. --
2013 Sep 05
1
NFS can't be used by ESXi with Striped Volume
After some testing, I confirm that ESXi can't use a Striped-Replicate volume over GlusterFS's NFS, but it succeeds with Distributed-Replicate. Anyone know how or why? 2013/9/5 higkoohk <higkoohk at gmail.com> > Thanks Vijay! > > It ran successfully after 'volume set images-stripe nfs.nlm off'. > > Now I can use ESXi with GlusterFS's NFS export. > > Many
2012 Oct 23
1
Problems with striped-replicated volumes on 3.3.1
Good afternoon, I am playing around with GlusterFS 3.3.1 in CentOS 6 virtual machines to see if I can get a proof of concept for a bigger project. In my setup, I have 4 GlusterFS servers with two bricks each of 10GB with XFS (per your quick-start guide), so I have a total of 8 bricks. I have no problem with distributed-replicated volumes. However, when I set up a striped replicated
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K blocksize) from the old gluster striped volume and it totaled 9.2TB. With the old setup I used the following option in a "volume stripe" block in the configuration file on a client: volume stripe type cluster/stripe option
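A rough back-of-the-envelope sketch (not GlusterFS code) of the trade-off the thread is asking about: with stripe block size B and stripe count S, a file of F bytes is dealt out round-robin in B-sized chunks, so each brick holds about ceil(ceil(F/B)/S) chunks. A smaller block size means more, smaller requests per brick.

```shell
# chunks_per_brick FILE_BYTES BLOCK_BYTES STRIPE_COUNT
# -> number of block-size chunks each brick holds, rounded up.
chunks_per_brick() {
    file=$1; block=$2; stripe=$3
    total=$(( (file + block - 1) / block ))     # ceil(F/B)
    echo $(( (total + stripe - 1) / stripe ))   # ceil(total/S)
}

chunks_per_brick 1048576 131072 4   # 1 MiB file, 128KB blocks, stripe 4 -> 2
# The block size is a per-volume option, e.g.:
# gluster volume set <VOLNAME> cluster.stripe-block-size 128KB
```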
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off of it but not when I use rsync to copy files off: [user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/ cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort cp: closing
2017 Aug 14
2
Is transport=rdma tested with "stripe"?
Hi, I have 2 servers with Mellanox InfiniBand FDR hardware/software installed. A volume with "replica 2 transport rdma" works (create on servers, mount and test on clients) ok. A volume with "stripe 2 transport tcp" works ok, too. A volume with "stripe 2 transport rdma" created ok, and mounted ok on a client, but writing a file caused "endpoint not
2017 Aug 14
0
Is transport=rdma tested with "stripe"?
Forgot to mention that I was using CentOS7.3 and GlusterFS 3.10.3 that is the latest available. From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Hatazaki, Takao Sent: Tuesday, August 15, 2017 2:32 AM To: gluster-users at gluster.org Subject: [Gluster-users] Is transport=rdma tested with "stripe"? Hi, I have 2 servers with Mellanox
2017 Jun 30
2
Very slow performance on Sharded GlusterFS
Hi, I have 2 nodes with 20 bricks in total (10+10). First test: 2 nodes with Distributed - Striped - Replicated (2 x 2), 10GbE speed between nodes. "dd" performance: 400MB/s and higher. Downloading a large file from the internet directly to the gluster: 250-300MB/s. Now the same test without stripe but with sharding. These results are the same whether I set the shard size to 4MB or
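A runnable sketch of the dd test described above; TARGET would normally be a file on the glusterfs mount (e.g. a hypothetical /mnt/gluster/testfile), but defaults to a local temp file here so the commands work as-is. conv=fsync forces a flush before dd reports, so the rate includes write-back, which matters when comparing stripe against shard.

```shell
# Write 64 MiB of zeros and confirm the size that landed on disk.
TARGET=${TARGET:-$(mktemp)}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>/dev/null
wc -c < "$TARGET"   # -> 67108864 (64 MiB written)
rm -f "$TARGET"
```

Running the same command against the striped volume and the sharded volume (with only the mount point changed) keeps the comparison apples-to-apples.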
2013 Jan 25
1
Striped Replicated Volumes: create files error.
Hi there, each time I copy (or dd or similar) a file to a striped replicated volume I get an error: the argument is not valid. An empty file is created. If I then re-run the copy, it works. This is independent of the client platform. We are using version 3.3.1. Mit freundlichen Grüßen / Kind regards, Axel Weber
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
I already tried 512MB but re-tried again now and the results are the same. Both without tuning; stripe 2 replica 2: dd performs ~250 MB/s but shard gives 77 MB/s. I attached two logs (shard and stripe logs). Note: I also noticed that you said 'order'. Do you mean when we create via volume set we have to make an order for the bricks? I thought gluster handles (and does the math) itself. Gencer
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
Hi, I have 2 nodes with 20 bricks in total (10+10). First test: 2 nodes with Distributed - Striped - Replicated (2 x 2), 10GbE speed between nodes. "dd" performance: 400MB/s and higher. Downloading a large file from the internet directly to the gluster: 250-300MB/s. Now the same test without stripe but with sharding. These results are the same whether I set the shard size to 4MB or
2017 Aug 23
1
Brick count limit in a volume
This is the command line output: Total brick list is larger than a request. Can take (brick_count 4444) Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] .... I am testing if a big single volume will work for us. Now I am continuing testing with three volumes each 13PB...
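A sketch of one workaround for the request-size limit above, using a hypothetical helper: since a single CLI request cannot carry the whole brick list, create the volume with an initial batch and grow it with repeated add-brick calls. batch_bricks just prints the list in groups; keep the batch size a multiple of the replica (and stripe) count so each request adds whole sets.

```shell
# batch_bricks SIZE brick... -- print the brick list SIZE entries per line.
batch_bricks() {
    size=$1; shift
    while [ $# -gt 0 ]; do
        batch=""
        i=0
        while [ "$i" -lt "$size" ] && [ $# -gt 0 ]; do
            batch="$batch $1"; shift; i=$((i + 1))
        done
        echo "${batch# }"
        # each printed line would become one request, e.g.:
        # gluster volume add-brick <NEW-VOLNAME> $batch
    done
}

batch_bricks 2 h1:/b h2:/b h3:/b h4:/b h5:/b
```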
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
Hi Krutika, Sure, here is volume info: root at sr-09-loc-50-14-18:/# gluster volume info testvol Volume Name: testvol Type: Distributed-Replicate Volume ID: 30426017-59d5-4091-b6bc-279a905b704a Status: Started Snapshot Count: 0 Number of Bricks: 10 x 2 = 20 Transport-type: tcp Bricks: Brick1: sr-09-loc-50-14-18:/bricks/brick1 Brick2: sr-09-loc-50-14-18:/bricks/brick2 Brick3:
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
Hi Gencer, I just checked the volume-profile attachments. Things that seem really odd to me as far as the sharded volume is concerned: 1. Only the replica pair having bricks 5 and 6 on both nodes 09 and 10 seems to have witnessed all the IO. No other bricks witnessed any write operations. This is unacceptable for a volume that has 8 other replica sets. Why didn't the shards get distributed
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
Just noticed that the way you have configured your brick order during volume-create makes both replicas of every set reside on the same machine. That apart, do you see any difference if you change shard-block-size to 512MB? Could you try that? If it doesn't help, could you share the volume-profile output for both the tests (separate)? Here's what you do: 1. Start profile before starting
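A sketch of the ordering point above, with hypothetical short hostnames: with "replica 2", each consecutive pair of bricks on the create command line forms one replica set, so listing all of node A's bricks before node B's puts both copies of every set on one machine. Alternating hosts per brick keeps each pair spanning both nodes.

```shell
# interleave N HOSTA HOSTB -- print N replica pairs, one pair per line,
# alternating hosts so no replica set lands on a single machine.
interleave() {
    n=$1; a=$2; b=$3
    i=1
    while [ "$i" -le "$n" ]; do
        echo "$a:/bricks/brick$i $b:/bricks/brick$i"
        i=$((i + 1))
    done
}

interleave 2 sr-09 sr-10
# -> sr-09:/bricks/brick1 sr-10:/bricks/brick1
#    sr-09:/bricks/brick2 sr-10:/bricks/brick2
```

Passing that interleaved list to `gluster volume create <VOLNAME> replica 2 ...` pairs brick1-with-brick1 and brick2-with-brick2 across the two hosts.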
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi Krutika, Have you been able to look at my profiles? Do you have any clue, idea or suggestion? Thanks, -Gencer From: Krutika Dhananjay [mailto:kdhananj at redhat.com] Sent: Friday, June 30, 2017 3:50 PM To: gencer at gencgiyen.com Cc: gluster-user <gluster-users at gluster.org> Subject: Re: [Gluster-users] Very slow performance on Sharded GlusterFS Just noticed that the
2010 Oct 22
2
cannot create volume
hi all guys! I'm using glusterfs 3.10 now. I got 10 nodes to run gluster. I found a problem here: [root at gluster-bak-1 /root] #gluster volume create db-backup stripe 4 transport tcp gluster-bak-3:/data3 gluster-bak-4:/data4 gluster-bak-5:/data5 gluster-bak-6:/data6 Creation of volume db-backup has been unsuccessful. The volume cannot be created! And I found the log in
2017 Jun 30
0
Very slow performance on Sharded GlusterFS
Could you please provide the volume-info output? -Krutika On Fri, Jun 30, 2017 at 4:23 PM, <gencer at gencgiyen.com> wrote: > Hi, > > > > I have 2 nodes with 20 bricks in total (10+10). > > > > First test: > > > > 2 Nodes with Distributed - Striped - Replicated (2 x 2) > > 10GbE Speed between nodes > > > > "dd" performance: