similar to: [RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile

Displaying 20 results from an estimated 6000 matches similar to: "[RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile"

2011 Nov 01
0
[PATCH] Btrfs-progs: change the way mkfs picks raid profiles
Currently, in response to mkfs.btrfs -d raid10 dev1 dev2, instead of telling the user "you can't do that", mkfs creates a SINGLE profile on two devices, and only a rebalance can transform it to raid0. In general it never warns users about the decisions it makes, and it's not at all obvious which profile it picks when. Fix this by checking the number of effective devices and reporting back
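A minimal reproduction of the behaviour being complained about, plus the rebalance the author alludes to (device paths are placeholders, and the convert filter uses the later restriper syntax, so treat this as a sketch rather than commands from the patch):

# raid10 needs at least four devices, so mkfs silently falls back
# to the SINGLE profile here instead of refusing:
mkfs.btrfs -d raid10 /dev/sdb /dev/sdc
mount /dev/sdb /mnt
btrfs filesystem df /mnt   # a plain "Data:" line, i.e. single
# Only a rebalance can then convert the data profile to raid0:
btrfs balance start -dconvert=raid0 /mnt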
2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store the number of redundant disks -- so instead of RAID5, RAID6, etc., we end up with a single 'RAID56' flag, and the amount of redundancy is stored elsewhere. This attempts it, but I hate it and don't really want to do it. The type field is designed as a bitmask, and _used_ as a bitmask in a number of
2012 Jan 11
12
[PATCH 00/11] Btrfs: some patches for 3.3
The biggest one is a fix for fstrim, and there's a fix for the on-disk free space cache. The others are small fixes and cleanups. The last three were sent weeks ago. The patchset is also available in this repo: git://repo.or.cz/linux-btrfs-devel.git for-chris Note there's a small conflict with Al Viro's vfs changes. Li Zefan (11): Btrfs: add pinned extents to
2012 Mar 15
0
[PATCH] Btrfs: fix deadlock during allocating chunks
This deadlock comes from xfstests 251. We'll hold the chunk_mutex throughout the whole of a chunk allocation. But if we find that we've used up system chunk space, we need to allocate a new system chunk, which leads to a recursion of chunk allocation and ends up with a deadlock on chunk_mutex. So instead we need to allocate the system chunk first if we find we're
2012 Oct 04
8
[PATCH][BTRFS-PROGS][V3] btrfs filesystem df
Hi Chris, this series of patches updates the command "btrfs filesystem df". I reworked this command because it is not easy to get disk usage information from the commands "fi df" and "fi show". This patch was the result of some discussions on the btrfs mailing list. Many thanks to all the contributors. From the man page (see 2nd patch): [...] The
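For context, the pain point being addressed: answering "how full is this filesystem, and where?" currently takes two commands whose outputs do not line up (paths invented; a sketch, not output from the patch):

btrfs filesystem df /mnt        # per-profile totals, no per-device view
btrfs filesystem show /dev/sdb  # per-device usage, no per-profile view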
2013 Mar 02
1
[PATCH] btrfs: return EPERM in btrfs_rm_device()
Currently there are error paths in btrfs_rm_device() where EINVAL is returned telling the user they passed an invalid argument even though they passed a valid device. Change to return EPERM instead as the operation is not permitted. Signed-off-by: Jerry Snitselaar <jerry.snitselaar@oracle.com> --- fs/btrfs/volumes.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git
2011 Aug 23
40
[PATCH 00/21] [RFC] Btrfs: restriper
Hello, This patch series adds an initial implementation of restriper (it's a clever name for a relocation framework that allows selective profile changing and selective balancing, with some goodies like pausing/resuming and reporting progress to the user). Profile changing is global (per-FS) so far; per-subvolume profiles require some discussion and can be implemented in future.
2011 Aug 08
7
“bio too big” regression and silent data corruption in 3.0
tl;dr version: 3.0 produces “bio too big” dmesg entries and silently corrupts data in “meta-raid1/data-single” configurations on disks with different max_hw_sectors, where 2.6.38 worked fine. tl;dr side-issue: on-line removal of partitions holding “single” data attempts to create raid0 (rather than single) block groups. If it can't get enough room for raid0 over all remaining disks, it
2012 Apr 08
4
[PATCH] Revert "Btrfs: increase the global block reserve estimates"
This reverts commit 5500cdbe14d7435e04f66ff3cfb8ecd8b8e44ebf. We had numerous reports of premature ENOSPC that were bisected to this patch. Reverting will not break things but a warning in 'use_block_rsv' may show up in the syslog. There's no alternative fix in sight and the ENOSPC problem affects all 3.3 btrfs users during normal filesystem use. CC:
2011 May 02
5
[PATCH v3 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi-device setup, the chunk allocator currently always allocates chunks on the devices in the same order. This leads to a very uneven distribution, especially with RAID1 or RAID10 and an odd number of devices. This patch always sorts the devices before allocating, and allocates the stripes on the devices with the most available space, as long as there is enough space available. In a low
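A rough way to observe the imbalance this patch targets, using loop devices (sizes and paths are made up; this is an illustration, not a test case from the series):

for i in 0 1 2; do
  dd if=/dev/zero of=/tmp/disk$i.img bs=1M count=1024
  losetup /dev/loop$i /tmp/disk$i.img
done
mkfs.btrfs -m raid1 -d raid1 /dev/loop0 /dev/loop1 /dev/loop2
btrfs device scan
mount /dev/loop0 /mnt
# Force several chunk allocations, then compare per-device usage;
# without the patch the three devices fill at noticeably different rates.
dd if=/dev/zero of=/mnt/fill bs=1M count=1024
btrfs filesystem show /dev/loop0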
2011 Apr 12
3
[PATCH v2 0/3] btrfs: quasi-round-robin for chunk allocation
In a multi-device setup, the chunk allocator currently always allocates chunks on the devices in the same order. This leads to a very uneven distribution, especially with RAID1 or RAID10 and an odd number of devices. This patch always sorts the devices before allocating, and allocates the stripes on the devices with the most available space, as long as there is enough space available. In a low
2010 Jan 04
0
[RFC 03/12 RESEND PATCH] Btrfs: Reorder __btrfs_map_block to make code more efficient.
Allocate the multi structure only after we know the correct size, and do not perform unneeded steps when we are only returning the length. Signed-off-by: jim owens <jowens@hp.com> --- fs/btrfs/volumes.c | 65 +++++++++++++++++++-------------------------------- 1 files changed, 24 insertions(+), 41 deletions(-) diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index 5af76fc..e6599ef 100644 ---
2012 May 07
53
kernel 3.3.4 damages filesystem (?)
Hallo, "never change a running system" ... For some months I run btrfs unter kernel 3.2.5 and 3.2.9, without problems. Yesterday I compiled kernel 3.3.4, and this morning I started the machine with this kernel. There may be some ugly problems. Copying something into the btrfs "directory" worked well for some files, and then I got error messages (I''ve not
2012 Feb 03
10
[PATCH 0/3] Btrfs-progs: restriper interface
Hello, This is the userspace part of restriper, rebased onto the new progs infrastructure. Restriper commands are located under the 'balance' prefix, which is now the top-level command group. However, so as not to confuse existing users, the 'balance' prefix is also available under 'filesystem': btrfs [filesystem] balance start btrfs [filesystem] balance
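Driven from a shell, the interface described above might look like this (the convert filter values are illustrative, not taken from the posting):

btrfs balance start -dconvert=raid1 /mnt   # kick off a converting balance
btrfs balance status /mnt                  # report progress
btrfs balance pause /mnt                   # pause ...
btrfs balance resume /mnt                  # ... and resume later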
2013 Jun 26
1
some feedbacks seen on btrfs
First off, thanks for an awesome file system; it is working well for my purposes of compressing a filesystem on a small VPS. Woot! I thought I'd call out a few things I'd seen about btrfs (in the hopes of spurring improvements), in case they weren't common knowledge...:
2012 Oct 25
46
[RFC] New attempt to a better "btrfs fi df"
Hi all, this is a new attempt to improve the output of the command "btrfs fi df". The previous attempt received a good reception. However, there was no general consensus about the wording. Moreover, I still didn't understand how btrfs was using the disks. My first attempt was to develop a new command which shows how the disks
2011 Jan 12
1
Filesystem creation in "degraded mode"
I've had a go at determining exactly what happens when you create a filesystem without enough devices to meet the requested replication strategy:
# mkfs.btrfs -m raid1 -d raid1 /dev/vdb
# mount /dev/vdb /mnt
# btrfs fi df /mnt
Data: total=8.00MB, used=0.00
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=153.56MB, used=24.00KB
Metadata:
2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair in Arch Linux? I assume raid0 means striped activity in a parallel mode, at least similar to raid0 in mdadm. How can I measure the btrfs read speed, since it is copy-on-write, which is not the norm in mdadm raid0? Perhaps I cannot use the same approach in btrfs to determine the performance. Secondly, I see a methodology for raid10 using
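One generic way to get a sequential read figure while sidestepping the page cache; everything here (mount point, file name, sizes) is a placeholder, and this is a common recipe rather than advice from the thread:

# Write a test file first; urandom defeats any easy compression.
dd if=/dev/urandom of=/mnt/raid0/testfile bs=1M count=2048 conv=fsync
# Drop caches, then time an O_DIRECT read back.
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/raid0/testfile of=/dev/null bs=1M iflag=direct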
2009 Nov 19
10
Unable to mount loopback devices in RAID mode
Hi! I recently tried to mount a filesystem in RAID1 mode using loopback devices. I followed the instructions at [1]. Here's exactly what I've done:
$ dd if=/dev/zero of=raid1_0.img bs=1M count=500
$ dd if=/dev/zero of=raid1_1.img bs=1M count=500
$ mkfs.btrfs -m raid1 -d raid1 raid1_0.img raid1_1.img
$ losetup /dev/loop0 raid1_0.img
$ losetup /dev/loop1 raid1_1.img
$ mount -t
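The excerpt cuts off mid-command, but the classic stumbling block with multi-device btrfs on loop devices is that the kernel has only scanned one member. A commonly cited workaround (my assumption, not taken from this thread) is:

# Register all member devices before mounting ...
btrfs device scan /dev/loop0 /dev/loop1
mount -t btrfs /dev/loop0 /mnt
# ... or name the extra member explicitly in the mount options:
mount -t btrfs -o device=/dev/loop1 /dev/loop0 /mnt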
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
Hi guys and gals, do you know if conversion from LVM's raid10 to raid0 is possible? I'm fiddling with --splitmirrors but it gets me nowhere. On the "takeover" subject, the man page says: "..between striped/raid0 and raid10." but gives no details, and I could find no documentation or howto anywhere. Many thanks, L.
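If the installed LVM supports the takeover the man page hints at, the direct path would presumably be a single lvconvert (volume names are placeholders, and this is a guess at the intended usage, not a tested answer):

# Takeover from raid10 straight to raid0, where supported:
lvconvert --type raid0 vg0/lv_data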