Displaying 20 results from an estimated 10000 matches similar to: "4 vol raid5 segfault on device delete"
2011 Nov 23
2
stripe alignment consideration for btrfs on RAID5
Hiya,
is there any recommendation out there for setting up a btrfs FS on top
of hardware or software raid5 or raid6 wrt stripe/stride alignment?
From mkfs.btrfs, it doesn't look like there's much that can be
adjusted that would help, and what I'm asking might not even
make sense for btrfs, but I thought I'd just ask.
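A minimal sketch of what can be checked today, assuming a software-raid device /dev/md2: mdadm reports the array's chunk size, while mkfs.btrfs only exposes sector and node sizes, not a stripe width.
# mdadm --detail /dev/md2 | grep 'Chunk Size'
# mkfs.btrfs -s 4096 -n 16384 /dev/md2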
Thanks,
Stephane
2013 Feb 18
1
RAID5/6 Implementation - Understanding first
Chris and team, hats off on getting RAID5/6 to at least experimental status. I have been following your work for a year now and waiting for these days.
I am trying to get my head wrapped around the BTRFS architecture before I jump in and start recommending code changes to the branch.
What I am trying to understand is the comments in the GIT commit which state:
Read/modify/write is done after the
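For context, a read/modify/write parity update recomputes RAID5 parity from the old data and old parity alone, without reading the rest of the stripe; a minimal standalone sketch (not btrfs code):

#include <stdint.h>
#include <stddef.h>

/* new parity = old parity XOR old data XOR new data */
static void rmw_parity(uint8_t *parity, const uint8_t *old_data,
                       const uint8_t *new_data, size_t n)
{
        for (size_t i = 0; i < n; i++)
                parity[i] ^= old_data[i] ^ new_data[i];
}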
2013 Sep 23
12
balance induced csum errors
SAMSUNG SSD 830 Series
CPU0: Intel® Core(TM) i7-2820QM CPU @ 2.30GHz (fam: 06, model: 2a, stepping: 07)
8GB RAM (quite heavily tested, not recently, with several days of memtest)
kernel 3.11.1-200.fc19.x86_64 running on baremetal
btrfs-progs-0.20.rc1.20130308git704a08c-1.fc19.x86_64
Today I did a scrub on a btrfs volume, with no messages or errors in the console, dmesg, or the journal. Immediately after
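For reference, the scrub pass and its summary come from the stock commands (mount point assumed):
# btrfs scrub start /mnt/vol
# btrfs scrub status /mnt/vol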
2013 Aug 29
2
bug
I made a btrfs filesystem on five disks using RAID5 (the -d raid5 mkfs option). After a power failure, I cannot remount the filesystem once my system reboots. Dmesg for the remount shows the following:
[ 192.713953] bio: create slab <bio-1> at 1
[ 192.716230] Btrfs loaded
[ 192.717177] device fsid a0dff7ea-9354-43fd-8516-0e17f370991d devid 1 transid 6 /dev/sdb
[ 192.717712] btrfs: disk space
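A recovery attempt on kernels of this era would typically start with the recovery mount option, which tries backup tree roots (no guarantee it helps in this case):
# mount -o recovery /dev/sdb /mnt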
2013 May 10
5
Btrfs balance invalid argument error
Hi list,
I am using kernel 3.9.0, btrfs-progs 0.20-rc1-253-g7854c8b.
I have a three disk array of level single:
# btrfs fi sh
Label: none uuid: 2e905f8f-e525-4114-afa6-cce48f77b629
Total devices 3 FS bytes used 3.80TB
devid 1 size 2.73TB used 2.25TB path /dev/sdd
devid 2 size 2.73TB used 1.55TB path /dev/sdc
devid 3 size 2.73TB used 0.00 path /dev/sdb
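For comparison, a plain and a filtered balance look like this (mount point assumed); a kernel/progs version mismatch is a common source of EINVAL with the filter syntax:
# btrfs balance start /mnt
# btrfs balance start -dusage=50 /mnt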
2011 Feb 17
7
Re: [Bugme-new] [Bug 29302] New: Null pointer dereference with large max_sectors_kb
(switched to email. Please respond via emailed reply-to-all, not via the
bugzilla web interface).
On Thu, 17 Feb 2011 13:20:20 GMT
bugzilla-daemon@bugzilla.kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=29302
>
> Summary: Null pointer dereference with large max_sectors_kb
> Product: IO/Storage
> Version: 2.5
> Kernel
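The knob in question is a per-queue sysfs file; a sketch of the reproducer shape (device name and value assumed):
# cat /sys/block/sdb/queue/max_hw_sectors_kb
# echo 32767 > /sys/block/sdb/queue/max_sectors_kb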
2013 Jun 03
3
csum failed during rebalance
Hi,
I added a new drive to an existing RAID 0 array. Every
attempt to rebalance the array fails:
# btrfs filesystem balance /share/bd8
ERROR: error during balancing '/share/bd8' - Input/output error
# dmesg | tail
btrfs: found 1 extents
btrfs: relocating block group 10752513540096 flags 1
btrfs: found 5 extents
btrfs: found 5 extents
btrfs: relocating block group 10751439798272
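A foreground scrub is one way to check whether the checksum errors persist outside of balance (path from the post):
# btrfs scrub start -B /share/bd8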
2013 Mar 09
4
[PATCH] use rcu_barrier() to wait for bdev puts at unmount
Doing this would reliably fail with -EBUSY for me:
# mount /dev/sdb2 /mnt/scratch; umount /mnt/scratch; mkfs.btrfs -f /dev/sdb2
...
unable to open /dev/sdb2: Device or resource busy
because mkfs.btrfs tries to open the device O_EXCL, and somebody still has it.
Using systemtap to track bdev gets & puts shows a kworker thread doing a
blkdev put after mkfs attempts a get; this is left over
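The shape of the fix, as a sketch (the patch places this in the unmount/device-close path):

        /* wait for outstanding call_rcu() callbacks, which may hold
         * the last bdev reference, before userspace can retry an
         * O_EXCL open of the device */
        rcu_barrier();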
2011 May 04
2
Cannot resize btrfs volume
Hello,
I added a new disk to our RAID5 array; it now looks like this:
md2 : active raid5 sdd4[3] sde4[4] sda4[0] sdc4[2] sdb4[1]
3767274240 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
# btrfs fi sh
Label: none uuid: 5534d2e7-be31-49c7-8ab7-90c5ab8afe18
Total devices 1 FS bytes used 2.24TB
devid 3 size 2.63TB used 2.63TB path /dev/md2
# mount
...
/dev/md2 on /home type btrfs
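For reference, the resize command for a grown underlying device is the following (mount point from the post; the thread is about this step failing):
# btrfs filesystem resize max /home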
2012 May 03
1
[PATCH] Btrfs: fix crash in scrub repair code when device is missing
Fix a crash: when scrub tries to repair an I/O or checksum error and one of
the devices containing the mirror is missing, it crashes in bio_add_page
because the bdev is a NULL pointer for missing devices.
Reported-by: Marco L. Crociani <marco.crociani@gmail.com>
Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
---
fs/btrfs/scrub.c | 7 +++++++
1 file changed, 7 insertions(+)
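The guard presumably looks like this sketch (names abbreviated, not the literal patch):

        /* skip repair when the mirror's device is missing */
        if (!dev || !dev->bdev)
                return -EIO;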
2012 May 25
6
[PATCH v5 0/3] Btrfs: add IO error device stats
Changes v1-v2:
- Remove restriction that BTRFS_IOC_GET_DEVICE_STATS is a privileged
operation
- Cast u64 to unsigned long long for printf()
Changes v2-v3:
- Rebased on Chris' current master
Changes v3-v4:
- Add padding at end of ioctl structure
Changes v4-v5:
- The statistic members in the ioctl are now organized as an array of
64 bit values. Symbolic names for the array indexes
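A hypothetical sketch of the layout described above (not the literal header; names and sizes invented for illustration):

#include <linux/types.h>

struct get_dev_stats {
        __u64 devid;        /* in: device to query */
        __u64 nr_items;     /* in/out: number of counters */
        __u64 values[5];    /* read/write/flush/corruption/generation errors */
        __u64 padding[121]; /* padding at the end of the ioctl structure */
};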
2011 Dec 09
10
[PATCH 0/3] Btrfs: add IO error device stats
The goal is to detect when drives start to show an increased error rate
and should therefore be replaced soon. Therefore statistic counters are
added that count IO errors (read, write and flush). Additionally, the
software detected errors like checksum errors and corrupted blocks are
counted.
An ioctl interface is added to get the device statistic counters.
A second ioctl is added to atomically get
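In later tooling these counters surface through the CLI (mount point assumed):
# btrfs device stats /mnt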
2013 Jun 16
1
btrfs balance resume + raid5/6
Greetings!
I'm testing raid6, and recently added two drives.
I haven't been able to properly resume a balance operation: the number of
total chunks is always too low.
It seems that the balance starts and pauses properly, but always resumes
with ~7 chunks.
Here's an example:
vendikar tim # uname -r
3.10.0-031000rc4-generic
vendikar tim # btrfs fi sho
Label:
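The cycle being tested, for reference (mount point assumed):
# btrfs balance pause /mnt
# btrfs balance resume /mnt
# btrfs balance status /mnt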
2012 Jul 30
4
balance disables nodatacow
I have a 3 disk raid1 filesystem mounted with nodatacow. I have a
folder in said filesystem with the 'C' NOCOW & 'Z' Not_Compressed
flags set for good measure. I then copy in a large file and proceed to
make random modifications. Filefrag shows no additional extents
created, good so far. A big thank you to the those devs who got that
working.
However, after
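For reference, the flag is set and inspected like this (path assumed; +C only takes effect on empty or newly created files):
# chattr +C /mnt/folder
# lsattr -d /mnt/folder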
2012 Feb 11
3
Hot data Tracking
What happened to the hot data tracking feature in btrfs? There are a lot
of old patches from Aug 2010, but it looks like the feature has been
completely removed from the current version of btrfs. Is this feature
still on the roadmap?
2011 Feb 12
3
[PATCH] fix uncheck memory allocations
To make Btrfs code more robust, several return value checks where memory
allocation can fail are introduced. I use BUG_ON where I don't know how
to handle the error properly, which admittedly increases the use of the
notorious BUG_ON.
Signed-off-by: Yoshinori Sano <yoshinori.sano@gmail.com>
---
fs/btrfs/compression.c | 6 ++++++
fs/btrfs/extent-tree.c | 2 ++
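The two styles the patch mixes, as a hypothetical call site (kernel-style sketch, not the literal diff):

        data = kmalloc(size, GFP_NOFS);
        if (!data)
                return -ENOMEM;      /* handled: propagate to the caller */

        tree = kzalloc(sizeof(*tree), GFP_NOFS);
        BUG_ON(!tree);               /* unhandled: the notorious BUG_ON */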
2013 Oct 06
5
btrfs device delete problem
Hi,
I'm getting an error when trying to delete a device from a raid1 (data
and metadata mirrored).
> btrfs filesystem show
failed to read /dev/sr0
Label: none uuid: 78b5162b-489e-4de1-a989-a47b91adef50
Total devices 2 FS bytes used 107.64GB
devid 2 size 149.05GB used 109.01GB path /dev/sdh1
devid 1 size 156.81GB used 109.03GB path /dev/sdb6
Btrfs v0.20-rc1
>
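For reference, the failing operation and the usual workaround (extra device name assumed): with only two devices, raid1 cannot drop below two copies, so a delete is expected to fail until a third device is added.
# btrfs device add /dev/sdX /mnt
# btrfs device delete /dev/sdh1 /mnt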
2011 Jun 27
7
[btrfs-delalloc-]
Hello all.
What we have:
SL6 - kernel 2.6.32-131.2.1.el6.x86_64
btrfs on mdadm RAID5 with 8 HDDs - a 27T partition.
I see this at top:
1182 root 20 0 0 0 0 R 100.0 0.0 16:39.73
[btrfs-delalloc-]
And the load average keeps growing. What is this and how can I fix it?
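One way to see where such a kernel thread is stuck, using the PID from the top output above:
# cat /proc/1182/stack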
--
Best regards,
Proskurin Kirill
2011 Dec 28
3
Btrfs: blocked for more than 120 seconds, made worse by 3.2 rc7
Hello all:
I have two machines with btrfs that give me the "blocked for more than
120 seconds" message. After that I cannot write anything to disk, I am
unable to unmount the btrfs filesystem, and I can only reboot with
sysrq-trigger.
It always happens when I write many files with rsync over the network. When
I used 3.2rc6 it happened randomly on both machines after 50-500GB of
writes.
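When a machine is wedged like this, the blocked tasks can be dumped for a bug report via the same sysrq interface:
# echo w > /proc/sysrq-trigger    (logs tasks in uninterruptible sleep to dmesg)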
2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store a
number of redundant disks -- so instead of RAID5, RAID6, etc., we end up
with a single 'RAID56' flag, and the amount of redundancy is stored
elsewhere.
This attempts it, but I hate it and don't really want to do it. The type
field is designed as a bitmask, and _used_ as a bitmask in a number of
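A standalone sketch of the packing being debated (all masks and shifts hypothetical, purely to illustrate carrying a parity count in the top bits of the type bitmask):

#include <stdint.h>

#define CHUNK_NPARITY_SHIFT 56
#define CHUNK_NPARITY_MASK  (0xffULL << CHUNK_NPARITY_SHIFT)

static inline uint64_t chunk_set_nparity(uint64_t type, uint64_t nparity)
{
        return (type & ~CHUNK_NPARITY_MASK) | (nparity << CHUNK_NPARITY_SHIFT);
}

static inline uint64_t chunk_get_nparity(uint64_t type)
{
        return (type & CHUNK_NPARITY_MASK) >> CHUNK_NPARITY_SHIFT;
}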