search for: stripesize

Displaying 19 results from an estimated 19 matches for "stripesize".

2012 Aug 16
2
Geom label lost after expanding partition
I have a GPT formatted disk where I recently expanded the size of a partition. I used "gpart resize -i 6 ada1" first to expand the partition to use the remaining free space and then growfs to modify the FFS file system to use the full partition. This was all done in single-user mode, of course, but when I enter "exit" to bring the system up, it failed to mount /usr. This was
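The sequence the poster describes can be sketched as follows (device name `ada1` and partition index 6 are from the message; the resulting partition name `ada1p6` is an assumption). Note that glabel stores its metadata in the last sector of a provider, so growing the partition is a plausible reason the label disappeared:

```shell
# Sketch of the resize sequence described above, run from single-user mode.
gpart resize -i 6 ada1      # grow partition 6 into the adjacent free space
growfs /dev/ada1p6          # expand the FFS filesystem to fill the partition
# glabel keeps its metadata in the provider's last sector, so resizing can
# orphan or destroy a label; check what survived:
glabel status
```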
2007 Nov 29
1
lvresize --resizefs
...efs function. Centos4. [root at serv01 ~]# lvresize Please specify either size or extents (not both) lvresize: Resize a logical volume lvresize [-A|--autobackup y|n] [--alloc AllocationPolicy] [-d|--debug] [-h|--help] [-i|--stripes Stripes [-I|--stripesize StripeSize]] {-l|--extents [+|-]LogicalExtentsNumber[%{VG|LV|FREE}] | -L|--size [+|-]LogicalVolumeSize[kKmMgGtTpPeE]} [-n|--nofsck] [-r|--resizefs] [-t|--test] [--type VolumeType] [-v|--verbose] [--version] LogicalVol...
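The usage message above maps onto invocations like the following (the VG/LV names and sizes here are made-up examples, not from the original message):

```shell
# Hypothetical names and sizes, for illustration only.
# Grow the LV by 10 GiB and resize the filesystem inside it in one step:
lvresize -r -L +10G /dev/vg00/lv00
# Equivalent two-step form if --resizefs misbehaves on an older lvm2:
lvextend -L +10G /dev/vg00/lv00
resize2fs /dev/vg00/lv00    # for ext2/3/4 filesystems
```

The error quoted above ("Please specify either size or extents") is simply the result of running `lvresize` with no `-L`/`-l` argument at all.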
2012 Apr 20
1
GEOM_PART: integrity check failed (mirror/gm0, MBR) on FreeBSD 8.3-RELEASE
...to eliminate this warning or is it safe to ignore please? sudo gpart list: Geom name: mirror/gm0 modified: false state: CORRUPT fwheads: 255 fwsectors: 63 last: 976773166 first: 63 entries: 4 scheme: MBR Providers: 1. Name: mirror/gm0s1 Mediasize: 500107829760 (465G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 32256 Mode: r2w2e3 attrib: active rawtype: 165 length: 500107829760 offset: 32256 type: freebsd index: 1 end: 976773167 start: 63 Consumers: 1. Name: mirror/gm0 Mediasize: 500107861504 (465G) Sectorsize: 512 Mode: r2w2e5 Geom nam...
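For the `state: CORRUPT` reported by `gpart list`, the usual remedy is `gpart recover`; a hedged sketch (back up the table first, and run it against the mirror provider, not the underlying disks):

```shell
# Save the current partition table before touching anything.
gpart backup mirror/gm0 > gm0-part.backup
# Rewrite the partition metadata so the integrity check passes.
gpart recover mirror/gm0
# Verify: state should now read OK instead of CORRUPT.
gpart list mirror/gm0 | grep state
```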
2007 Nov 26
15
bad 1.6.3 striped write performance
Hi, I'm seeing what can only be described as dismal striped write performance from lustre 1.6.3 clients :-/ 1.6.2 and 1.6.1 clients are fine. 1.6.4rc3 clients (from cvs a couple of days ago) are also terrible. the below shows that the OS (centos4.5/5) or fabric (gigE/IB) or lustre version on the servers doesn't matter - the problem is with the 1.6.3 and 1.6.4rc3 client kernels
2012 Jan 10
0
[PATCH V2] Btrfs: cleanup: move node-,leaf-,sectorsize to fs_info
...oot to btrfs_fs_info since we don't intend to allow different sizes between trees also removed sectorsize from btrfs_block_group_cache because it now can use the one in fs_info updated all uses accordingly please note in disk-io.c: -static int __setup_root(nodesize, leafsize, sectorsize, stripesize, - *root, *fs_info, objectid) +static int __setup_root(stripesize, *root, *fs_info, objectid) Signed-off-by: Simon Peeters <peeters.simon@gmail.com> --- fs/btrfs/backref.c | 2 +- fs/btrfs/compression.c | 8 +++--- fs/btrfs/ctree.c...
2017 Nov 04
3
using LVM thin pool LVs as a storage for libvirt guest
...L 1024K --type snapshot --virtualsize 1048576K storage) unexpected exit status 5: Volume group "storage" has insufficient free space (0 extents): 1 required. When I create thin volume manually, I do not see it: # lvcreate -n big -V 500G --thinpool storage/lvol1 Using default stripesize 64.00 KiB. WARNING: Sum of all thin volume sizes (500.00 GiB) exceeds the size of thin pool storage/lvol1 and the size of whole volume group (267.93 GiB)! For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100. Logical volume "big" cre...
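The manual creation the poster shows can be reproduced with a sequence like this (pool/VG names match the message; the pool size here is an assumed example):

```shell
# Sketch with an assumed pool size: thin pool "lvol1" in VG "storage".
lvcreate -L 200G --thinpool lvol1 storage           # pool backed by real extents
lvcreate -n big -V 500G --thinpool storage/lvol1    # virtual size may exceed pool
lvs -a storage                                      # Data% shows actual pool usage
```

The WARNING quoted above is expected for over-provisioned thin volumes: writes only start failing once the pool itself fills, which is why lvm suggests setting `thin_pool_autoextend_threshold` below 100.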
2009 Nov 12
0
[PATCH 05/12] Btrfs: Avoid orphan inodes cleanup during replaying log
...defrag_running; - int defrag_level; char *name; int in_sysfs; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 0cf1781..e2ebc47 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -894,6 +894,8 @@ static int __setup_root(u32 nodesize, u32 leafsize, u32 sectorsize, root->stripesize = stripesize; root->ref_cows = 0; root->track_dirty = 0; + root->in_radix = 0; + root->clean_orphans = 0; root->fs_info = fs_info; root->objectid = objectid; @@ -930,7 +932,6 @@ static int __setup_root(u32 nodesize, u32 leafsize, u32 sectorsize, root->defrag_trans_s...
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
...90000-PRISTINE-.usr.src.linux-2.6.9-67.0.4.EL-Lustre-1.6.4.2 Lustre: Added LNI 36.122.255.1 at o2ib [8/64] Lustre: Added LNI 36.121.255.1 at tcp [8/256] Lustre: Accept secure, port 988 Lustre: Lustre Client File System; info at clusterfs.com Lustre: ddnlfs-clilov-000001042f8b7c00.lov: set parameter stripesize=2M Lustre: Client ddnlfs-client has started Can I be certain it'll use IB for LFS on this client? Thanks, Chris > > Cheers, > Craig > > > > > Chris Worley wrote: > > More issues. Now, on the clients. > > > > The MDT/MGS/OST's are...
2013 May 23
11
raid6: rmw writes all the time?
Hi all, we got a new test system here and I just also tested btrfs raid6 on that. Write performance is slightly lower than hw-raid (LSI megasas) and md-raid6, but it probably would be much better than either of these two, if it wouldn't read all the time during the writes. Is this a known issue? This is with linux-3.9.2. Thanks, Bernd -- To unsubscribe from this list: send the line
2010 Apr 03
1
[PATCH] btrfs support
...help find the new super based on the log root */ + __le64 log_root_transid; + __le64 total_bytes; + __le64 bytes_used; + __le64 root_dir_objectid; + __le64 num_devices; + __le32 sectorsize; + __le32 nodesize; + __le32 leafsize; + __le32 stripesize; + __le32 sys_chunk_array_size; + __le64 chunk_root_generation; + __le64 compat_flags; + __le64 compat_ro_flags; + __le64 incompat_flags; + __le16 csum_type; + __u8 root_level; + __u8 chunk_root_level; + __u8 log_root_level; + /* trunca...
2017 Nov 07
0
Re: using LVM thin pool LVs as a storage for libvirt guest
...alsize 1048576K storage) unexpected exit status > 5: Volume group "storage" has insufficient free space (0 extents): 1 > required. > > When I create thin volume manually, I do not see it: > > # lvcreate -n big -V 500G --thinpool storage/lvol1 > Using default stripesize 64.00 KiB. > WARNING: Sum of all thin volume sizes (500.00 GiB) exceeds the size of > thin pool storage/lvol1 and the size of whole volume group (267.93 GiB)! > For thin pool auto extension activation/thin_pool_autoextend_threshold > should be below 100. > Logical volu...
2012 Mar 20
13
[PATCH 0 of 3 v2] PV-GRUB: add support for ext4 and btrfs
Hi, The following patches add support for ext4 and btrfs to PV-GRUB. These patches are taken nearly verbatim from those provided by Fedora and Gentoo. We've been using these patches for the PV-GRUB images available in EC2 for some time now with no problems. Changes from v1: - Makefile has been changed to check the exit code from patch - The btrfs patch has been rebased to apply
2017 Nov 07
1
Re: using LVM thin pool LVs as a storage for libvirt guest
...xpected exit status >> 5: Volume group "storage" has insufficient free space (0 extents): 1 >> required. >> >> When I create thin volume manually, I do not see it: >> >> # lvcreate -n big -V 500G --thinpool storage/lvol1 >> Using default stripesize 64.00 KiB. >> WARNING: Sum of all thin volume sizes (500.00 GiB) exceeds the size of >> thin pool storage/lvol1 and the size of whole volume group (267.93 GiB)! >> For thin pool auto extension activation/thin_pool_autoextend_threshold >> should be below 100. >&g...
2013 Jun 19
3
shutdown -r / shutdown -h / reboot all hang and don't cleanly dismount
Hello -STABLE@, So I've seen this situation seemingly randomly on a number of both physical 9.1 boxes as well as VMs for I would say 6-9 months at least. I finally have a physical box here that reproduces it consistently that I can reboot easily (ie; not a production/client server). No matter what I do: reboot shutdown -p shutdown -r This specific server will stop at "All buffers
2011 Aug 26
0
[PATCH] Btrfs: make some functions return void
...c int btrfs_destroy_marked_extents(struct btrfs_root *root, struct extent_io_tree *dirty_pages, @@ -1056,10 +1056,10 @@ int clean_tree_block(struct btrfs_trans_handle *trans, struct btrfs_root *root, return 0; } -static int __setup_root(u32 nodesize, u32 leafsize, u32 sectorsize, - u32 stripesize, struct btrfs_root *root, - struct btrfs_fs_info *fs_info, - u64 objectid) +static void __setup_root(u32 nodesize, u32 leafsize, u32 sectorsize, + u32 stripesize, struct btrfs_root *root, + struct btrfs_fs_info *fs_info, + u64 objectid) { root->node = NULL; root->commit_root...
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?
Hi, I've been migrating data from an old striped 3.0.x gluster install to a 3.3 beta install. I copied all the data to a regular XFS partition (4K blocksize) from the old gluster striped volume and it totaled 9.2TB. With the old setup I used the following option in a "volume stripe" block in the configuration file in a client : volume stripe type cluster/stripe option
2013 Mar 25
2
gptzfsboot: error 4 lba 30
...roller not jbod capable) - freebsd 9.1 REL (same error message with 9-STABLE from 2013-03-24) - server is zfs-only # diskinfo -v da0 da0 512 # sectorsize 146778685440 # mediasize in bytes (136G) 286677120 # mediasize in sectors 0 # stripesize 0 # stripeoffset 35132 # Cylinders according to firmware. 255 # Heads according to firmware. 32 # Sectors according to firmware. P61620D9SUP9ZS # Disk ident. # gpart show => 34 286677053 da0 GPT...
2011 Oct 06
26
[PATCH v0 00/18] btfs: Subvolume Quota Groups
This is a first draft of a subvolume quota implementation. It is possible to limit subvolumes and any group of subvolumes and also to track the amount of space that will get freed when deleting snapshots. The current version is functionally incomplete, with the main missing feature being the initial scan and rescan of an existing filesystem. I put some effort into writing an introduction into
2011 Oct 04
68
[patch 00/65] Error handling patchset v3
Hi all - Here's my current error handling patchset, against 3.1-rc8. Almost all of this patchset is preparing for actual error handling. Before we start in on that work, I'm trying to reduce the surface we need to worry about. It turns out that there is a ton of code that returns an error code but never actually reports an error. The patchset has grown to 65 patches. 46 of them