search for: stripes

Displaying 20 results from an estimated 937 matches for "stripes".

2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128KB), performance change if I reduce to a smaller block size?
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K block size) from the old gluster striped volume, and it totaled 9.2TB. With the old setup I used the following option in a "volume stripe" block in the configuration file on a client: volume stripe type cluster/stripe option
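
For reference, a sketch of how the stripe block size is expressed in each era, assuming a volume named "myvol" and two client subvolumes; the exact option names are from memory, so verify them against your release's docs:

    # Legacy (3.0.x-style) client volfile, cluster/stripe translator:
    volume stripe
      type cluster/stripe
      option block-size 128KB          # per-file stripe block size (assumed name)
      subvolumes client1 client2       # hypothetical subvolume names
    end-volume

    # 3.3-style, on an already-created volume:
    gluster volume set myvol cluster.stripe-block-size 128KB
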
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off of it but not when I use rsync to copy files off: [user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/ cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort cp: closing
2009 Sep 08
4
Can ZFS simply concatenate LUNs (e.g. no RAID0)?
Hi, I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concatenate those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understood, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get a RAID0 striping where each data block is split across all "n" LUNs. If that's
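
A sketch of the distinction, with the poster's hypothetical LUN names. zpool create always builds a dynamic stripe across top-level vdevs; unlike classic RAID0 it does not split every block across all LUNs, but there is no pure concatenation mode either:

    # Dynamic stripe across the LUNs: each block lands on one vdev, and
    # allocation rotates across them -- not a fixed-width RAID0 split.
    zpool create myPool lun-1 lun-2 lun-n
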
2007 Jul 07
17
Raid-Z expansion
Apologies for the blank message (if it came through). I have heard here and there that there might be in development a plan to make it such that a raid-z can grow its "raid-z'ness" to accommodate a new disk added to it. Example: I have 4Disks in a raid-z[12] configuration. I am uncomfortably low on space, and would like to add a 5th disk. The idea is to pop in disk 5 and have
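
For readers finding this thread later: single-disk raid-z expansion eventually shipped in OpenZFS 2.3 as "raidz expansion". A sketch, assuming a pool named tank whose raidz vdev is raidz1-0 and a new disk c0t5d0 (names hypothetical):

    # OpenZFS 2.3+ only; widens the existing raidz vdev by one disk:
    zpool attach tank raidz1-0 c0t5d0
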
2011 Dec 08
1
Can't create striped replicated volume
Hi, I'm trying to create a striped replicated volume but am getting this error: gluster volume create cloud stripe 4 replica 2 transport tcp nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
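
The "wrong brick type: replica" message is what pre-3.3 releases print, since stripe and replica could not yet be combined in one volume. Note also that stripe 4 replica 2 would need 8 bricks (stripe count x replica count); with the four bricks shown, the counts must be 2 and 2. A sketch of the corrected command on a 3.3+ CLI, reusing the hostnames from the post:

    gluster volume create cloud stripe 2 replica 2 transport tcp \
        nebula1:/dataPool nebula2:/dataPool \
        nebula3:/dataPool nebula4:/dataPool
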
2015 Dec 24
4
[PATCH] btrfs: Fix logical to physical block address mapping
The current btrfs support did not handle multiple stripes stored in chunk items, and hence skipped the physical addresses that were needed to do the mapping. Besides, the chunk tree may contain DEV_ITEM keys, which store information on all of the underlying block devices, so we must skip them instead of finishing the lookup. The bug was reproduced with btrfs-pr...
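
The mapping the patch restores can be sketched with hypothetical numbers: consecutive stripe_len-sized slices of the chunk's logical range rotate across the stripes listed in the chunk item. Illustrative bash arithmetic only; every value below is made up:

    # Chunk starts at logical 0x500000, 64 KiB stripe length, 2 stripes.
    logical=$((0x520000)); chunk_logical=$((0x500000))
    stripe_len=$((64 * 1024)); num_stripes=2
    off=$((logical - chunk_logical))
    nr=$((off / stripe_len))            # which stripe_len slice of the chunk
    idx=$((nr % num_stripes))           # which device stripe it rotated onto
    phys_off=$(( (nr / num_stripes) * stripe_len + off % stripe_len ))
    echo "stripe $idx, byte $phys_off past that stripe's device offset"
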
2013 Apr 30
1
3.3.1 distributed-striped-replicated volume
I'm hitting the 'cannot find stripe size' bug: [2013-04-29 17:42:24.508332] E [stripe-helpers.c:268:stripe_ctx_handle] 0-gv0-stripe-0: Failed to get stripe-size [2013-04-29 17:42:24.513013] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 867: FSYNC() ERR => -1 (Invalid argument) Is there a fix for this in 3.3.1 or do we need to move to git HEAD to make this work? M. --
2009 Oct 30
3
Stripe vs Cylinder alignement...
Hi, I modified my kickstart to do some custom partitioning and formatting in a pre-install script. I am trying to align the partitions on the RAID stripe (and format with a correct stride). But sfdisk complains that it does not start/end on a cylinder boundary (I used the -L option to limit the complaining). Since the cylinder size is not a multiple of the stripe size, I cannot align on both. I tried to
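
A sketch of the usual workaround: give up on cylinder alignment, address the disk in sectors, and hand the filesystem the stride. The geometry below is illustrative (256 KiB stripe, 4 KiB blocks, 3 data disks) and layout.txt is a hypothetical sfdisk input file:

    # Partition in sector units so starts can sit on stripe boundaries;
    # -L quiets the cylinder-boundary warnings, as in the post:
    sfdisk -uS -L /dev/sda < layout.txt
    # stride = stripe / block = 256K / 4K = 64; stripe-width = stride * 3:
    mke2fs -b 4096 -E stride=64,stripe-width=192 /dev/sda1
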
2010 Apr 19
4
upgrade zfs stripe
Hi there, since I am really new to ZFS, I have two important questions to start with. I have a NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My first question, for future-proofing: could I add just another drive to the pool and have ZFS integrate it flawlessly? And second, could this HDD also be a different size than 1.5TB, i.e. could I put in a 2TB drive and integrate it? Thanks in advance
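
Both answers are "yes" in ZFS terms, and a sketch shows how little is involved (pool and device names hypothetical); the one caveat is that a disk added this way becomes a new top-level vdev with no redundancy of its own:

    # Adds a third top-level vdev to the stripe; mixed sizes (1.5TB + 2TB)
    # are fine -- writes are spread in proportion to each vdev's free space.
    zpool add nas /dev/ada2
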
2012 Oct 23
1
Problems with striped-replicated volumes on 3.3.1
...-start guide). So, I have a total of 8 bricks. I have no problem with distributed-replicated volumes. However, when I set up a striped replicated volume and mounted it via the native client, I had problems with file operations. My striped-replicated volume had 8 bricks. I set it up with 4 stripes and 2 replicas. I made sure that no bricks on the same server were in pairs next to each other. The volume created fine and started fine. "gluster volume info" and "gluster volume status" showed no problems. The volume mounted fine. I tried to run "dd if=/dev/zero...
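
The brick ordering the poster describes matters because the CLI forms replica pairs from consecutive bricks. A sketch of an 8-brick stripe 4 replica 2 layout with hypothetical server/brick names, alternating servers so no replica pair lands on a single machine:

    gluster volume create gv0 stripe 4 replica 2 \
        s1:/brick1 s2:/brick1 \
        s3:/brick1 s4:/brick1 \
        s1:/brick2 s2:/brick2 \
        s3:/brick2 s4:/brick2
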
2012 Nov 06
2
I am very confused about stripe: in what way does it hold space?
I have 4 Dell 2970 servers; three of the servers have 146G x 6 hard disks, and one has 72G x 6. Each server's mount info is: /dev/sda4 on /exp1 type xfs (rw) /dev/sdb1 on /exp2 type xfs (rw) /dev/sdc1 on /exp3 type xfs (rw) /dev/sdd1 on /exp4 type xfs (rw) /dev/sde1 on /exp5 type xfs (rw) /dev/sdf1 on /exp6 type xfs (rw) I created a gluster volume with 4 stripes: gluster volume create test-volume3 stripe 4
2012 Mar 16
1
replicated-striped volume growing question
Hi, I have the following question: if I build a replicated-striped volume (one replica and one stripe), when I want to grow that volume, can I grow it by adding one brick and its replica, or do I have to add the stripe and its replica as well? Hope you can help me; thanks in advance. Juan Brenes. Print this email only if necessary. Act responsibly toward the environment.
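
The general rule, sketched below with hypothetical names: add-brick must receive bricks in multiples of stripe-count x replica-count, so e.g. a stripe 2 replica 2 volume grows four bricks at a time, never one brick plus its replica:

    # One new distribute subvolume = stripe(2) x replica(2) = 4 bricks:
    gluster volume add-brick myvol n5:/b n6:/b n7:/b n8:/b
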
2017 Aug 14
2
Is transport=rdma tested with "stripe"?
Hi, I have 2 servers with Mellanox InfiniBand FDR hardware/software installed. A volume with "replica 2 transport rdma" works (create on servers, mount and test on clients) ok. A volume with "stripe 2 transport tcp" works ok, too. A volume with "stripe 2 transport rdma" created ok, and mounted ok on a client, but writing a file caused "endpoint not
2006 May 26
6
test
test just testing please ignore... -- "I have learned that you shouldn't compare yourself to others - they are more screwed up than you think." ...unknown "In the 60's, people took acid to make the world weird. Now the world is weird and people take Prozac to make it normal." ...unknown _____________________________ Terry Remsik stripe-man.dyndns.org
2011 Dec 01
2
Creating striped replicated volume
We are having trouble creating a stripe 2 replica 2 volume across 4 hosts: user at gluster-fs-host-0:/gfsr$ sudo gluster volume create sr stripe 2 replica 2 glusterfs-host-0:/gfsr glusterfs-host-1:/gfsr glusterfs-host-2:/gfsr glusterfs-host-3:/gfsr wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path> We are on glusterfs 3.2.5
2013 May 23
11
raid6: rmw writes all the time?
Hi all, we got a new test system here and I just tested btrfs raid6 on it. Write performance is slightly lower than hw-raid (LSI megasas) and md-raid6, but it probably would be much better than either of those two if it wouldn't read all the time during the writes. Is this a known issue? This is with linux-3.9.2. Thanks, Bernd -- To unsubscribe from this list: send the line
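
One way to check whether the reads are RMW-related, as a sketch: issue writes sized and aligned to a full stripe, which lets parity be computed without reading old data first. Path and geometry below are hypothetical (4 data disks x 64 KiB chunk = 256 KiB full stripe):

    # Full-stripe-aligned writes should need no read-modify-write:
    dd if=/dev/zero of=/mnt/btrfs/rmwtest bs=256K count=4096 oflag=direct
    # Watch per-device reads during the run:
    iostat -x 1
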
2015 Dec 24
0
[PATCH v2] btrfs: Fix logical to physical block address mapping
The current btrfs support did not handle multiple stripes stored in chunk items, and hence skipped the physical addresses that were needed to do the mapping. Besides, the chunk tree may contain DEV_ITEM keys, which store information on all of the underlying block devices, so we must skip them instead of finishing the lookup. The bug was reproduced with btrfs-pr...
2015 Dec 27
0
[PATCH v3] btrfs: Fix logical to physical block address mapping
The current btrfs support did not handle multiple stripes stored in chunk items, and hence skipped the physical addresses that were needed to do the mapping. Besides, the chunk tree may contain DEV_ITEM keys, which store information on all of the underlying block devices, so we must skip them instead of finishing the lookup. The bug was reproduced with btrfs-pr...
2011 Feb 24
0
No subject
which is a stripe of the gluster storage servers, this is the performance I get (note: use a file size > amount of RAM on the client and server systems; 13GB in this case): 4k block size : 111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds testing from 8k -
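
The quoted numbers come from a dd-style write/read pass (nfsSpeedTest is the poster's own script). A minimal equivalent, assuming a mounted stripe at /pirstripe and a file larger than combined RAM, roughly the ~13 GB used above:

    # Write, then read back; conv=fdatasync keeps the write timing honest:
    dd if=/dev/zero of=/pirstripe/speedtest bs=4k count=3407872 conv=fdatasync
    dd if=/pirstripe/speedtest of=/dev/null bs=4k
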
2007 Nov 26
15
bad 1.6.3 striped write performance
...of days ago) are also terrible. The below shows that the OS (centos4.5/5) or fabric (gigE/IB) or lustre version on the servers doesn't matter - the problem is with the 1.6.3 and 1.6.4rc3 client kernels with striped writes (although un-striped writes are a tad slower too). With 1M lustre stripes:

  client OS   client kernel                        dd write speed (MB/s)
                                                    a)   b)   c)   d)
  1.6.2:
  centos4.5   2.6.9-55.0.2.EL_lustre.1.6.2smp      202  270  118  117
  centos5     2.6.18-8.1.8.el5_lustre.1.6.2rjh     166  190  117  119
  1.6.3+...
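
For context, Lustre stripe geometry is set per file or directory; a sketch with modern lfs syntax (clients of the 1.6 era used positional arguments instead), path hypothetical:

    # 1 MiB stripe size across 4 OSTs for everything created under dir:
    lfs setstripe -S 1M -c 4 /mnt/lustre/dir
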