Displaying 20 results from an estimated 20000 matches similar to: "[Question] How to restore btrfs raid0 image file?"
2013 Oct 04
1
btrfs raid0
How can I verify the read speed of a btrfs raid0 pair in Arch Linux?
I assume raid0 means striped activity in parallel, at least
similar to raid0 in mdadm.
How can I measure the btrfs read speed, since copy-on-write
is not the norm in mdadm raid0?
Perhaps I cannot use the same approach in btrfs to determine the
performance.
Secondly, I see a methodology for raid10 using
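The read-speed question above can be approximated in a filesystem-agnostic way with a sequential read once the page cache is dropped; a minimal sketch, assuming a large test file already exists on the raid0 mount (the path is a placeholder):

```shell
#!/bin/sh
# Minimal sketch: measure sequential read throughput of a file on the
# btrfs raid0 mount. FILE is a placeholder; point it at a real, large
# file. Dropping the page cache first (needs root) makes the reads hit
# the disks instead of RAM.
FILE=${1:-/mnt/raid0/testfile}
sync
sh -c 'echo 3 > /proc/sys/vm/drop_caches' 2>/dev/null || true
dd if="$FILE" of=/dev/null bs=1M 2>&1 | tail -n 1   # last dd line reports MB/s
```

Running the same read on a single-disk btrfs filesystem gives a baseline to compare the striped pair against.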
2012 Mar 23
2
btrfs crash after disk reconnect
Observed on Linux 3.2.9 after the controller/disk flaked in and out.
(The world still needs a SCSI error decoding tool to tell normal people
what cmd and res are about.)
[ 157.732885] device label srv devid 4 transid 11292 /dev/sdf
[ 157.733201] btrfs: disk space caching is enabled
[ 172.936515] device label srv devid 4 transid 11292 /dev/sdf
[44106.091461] ata4.01: exception Emask 0x0 SAct 0x0
2010 Nov 02
0
raid0 corruption, how to restore?
I have two disks that I formatted as btrfs RAID0 on opensuse 11.3. The
raid worked well several days until there was a power surge. The system
successfully rebooted and the btrfs raid reappeared, but the kernel
occasionally threw oops. That was my first experience with oops. After
two days, the btrfs raid failed to mount via fstab and when I manually
tried to mount it, there was a kernel
2011 Feb 05
2
Strangeness on btrfs balance..
Hi there...
I have kernel version 2.6.36.3, compiled with gcc 4.4.5, btrfstools
version 0.19+20101101
I have a btrfs filesystem (/data) consisting of two 1TB hard disks, raid0.
I added in another 1TB hard drive.
root@X86-64:~# btrfs filesystem show
failed to read /dev/sdh
failed to read /dev/sdg
failed to read /dev/sdf
failed to read /dev/sde
failed to read /dev/sr0
failed to read /dev/fd0u800
2010 Oct 28
0
RAID0 limiting disk utilization
I noticed that if I have single-device allocation for data in a
multi-device btrfs filesystem, a balance operation will convert the data
to RAID0. This is true even if '-d single' is specified explicitly when
creating the filesystem. Then it wants to continue using RAID0 for
future data allocations, and I run out of space once there's no longer
two drives with space
2012 Jul 07
0
block rsv returned -28
- RAID10 btrfs volume consisting of 4 disks.
- One disk failed, was replaced, resync started
(`btrfs dev add /dev/sdf /srv; btrfs dev del missing /srv`)
- Another disk failed before resync was done.
Disk was replaced, resync restarted.
(`btrfs dev add /dev/sdc /srv; btrfs dev del missing /srv`)
Naturally I don't expect it to recover from 2 failures, but
doing the attempt was
2013 Apr 03
2
[bug] btrfs fi df doesn't show raid type after balance
Did something break? We are not reporting the raid type after balance.
-----------
# btrfs fi df /btrfs
Data, RAID0: total=2.00GB, used=2.03MB
Data: total=8.00MB, used=0.00
System, RAID0: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID0: total=2.00GB, used=216.00KB
Metadata: total=8.00MB, used=4.00KB
# btrfs bal /btrfs
Done, had to relocate 5 out of 5 chunks
# btrfs fi
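For scripting around output like the above, the per-chunk profile can be scraped with awk; a small sketch using sample lines copied from this report (embedded so the snippet is self-contained):

```shell
# Sketch: print chunk type and raid profile from saved `btrfs fi df`
# output; the sample lines are copied from the report above.
btrfs_df='Data, RAID0: total=2.00GB, used=2.03MB
System, RAID0: total=16.00MB, used=4.00KB
Metadata, RAID0: total=2.00GB, used=216.00KB'
printf '%s\n' "$btrfs_df" | awk -F'[,:]' '$2 !~ /=/ {gsub(/ /,"",$2); print $1, $2}'
# prints three lines: Data RAID0 / System RAID0 / Metadata RAID0
```

A profile-less line such as `Data: total=8.00MB, used=0.00` is skipped by the `$2 !~ /=/` guard, which also makes the missing-profile output after balance easy to detect in a script.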
2013 Mar 15
0
[PATCH] btrfs-progs: mkfs: add missing raid5/6 description
Signed-off-by: Matias Bjørling <m@bjorling.me>
---
man/mkfs.btrfs.8.in | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/man/mkfs.btrfs.8.in b/man/mkfs.btrfs.8.in
index 41163e0..db8c57c 100644
--- a/man/mkfs.btrfs.8.in
+++ b/man/mkfs.btrfs.8.in
@@ -37,7 +37,7 @@ mkfs.btrfs uses all the available storage for the filesystem.
.TP
\fB\-d\fR, \fB\-\-data
2012 Jun 08
2
btrfs filesystems can only be mounted after an unclean shutdown if btrfsck is run and immediately killed!
Hi all,
I have two multi-disk btrfs filesystems on an Arch Linux 3.4.0 system.
After a power failure, both filesystems refuse to mount
[ 10.402284] Btrfs loaded
[ 10.402714] device fsid 1e7c18a4-02d6-44b1-8eaf-c01378009cd3 devid 4
transid 65282 /dev/sdc
[ 10.403108] btrfs: force zlib compression
[ 10.403130] btrfs: enabling inode map caching
[ 10.403152] btrfs: disk space caching is
2009 Jan 13
1
[btrfs-progs 1/4] Add man/mkfs.btrfs.8.in
Add man/mkfs.btrfs.8.in
Kept the name with the .in suffix, so that further processing such as
BUILD_DATE, BUILD_VERSION, etc. could be included later.
All man pages included in the man directory to avoid file cluttering.
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.de>
---
man/mkfs.btrfs.8.in | 63 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 63 insertions(+), 0
2011 Nov 01
0
[PATCH] Btrfs-progs: change the way mkfs picks raid profiles
Currently mkfs in response to
mkfs.btrfs -d raid10 dev1 dev2
instead of telling "you can''t do that" creates a SINGLE on two devices,
and only rebalance can transform it to raid0. Generally, it never warns
users about decisions it makes and it''s not at all obvious which profile
it picks when.
Fix this by checking the number of effective devices and reporting back
2012 May 04
2
btrfs scrub BUG: unable to handle kernel NULL pointer dereference
I think I have some failing hard drives, they are disconnected for now.
stan {~} root# btrfs filesystem show
Label: none uuid: d71404d4-468e-47d5-8f06-3b65fa7776aa
Total devices 2 FS bytes used 6.27GB
devid 1 size 9.31GB used 8.16GB path /dev/sde6
*** Some devices missing
Label: none uuid: b142f575-df1c-4a57-8846-a43b979e2e09
Total devices 8 FS bytes used
2012 Jan 17
8
[RFC][PATCH 1/2] Btrfs: try to allocate new chunks with degenerated profile
If there is no free space, the free space allocator will try to get space from
the block group with the degenerated profile. For example, if there is no free
space in the RAID1 block groups, the allocator will try to allocate space from
the DUP block groups. And besides that, the space reservation has the similar
behaviour: if there is not enough space in the space cache to reserve, it will
reserve
2012 Nov 22
0
raid10 data fs full after degraded mount
Hello,
on a fs with 4 disks, raid 10 for data, one drive was failing and has
been removed. After reboot and 'mount -o degraded...', the fs looks
full, even though before removal of the failed device it was almost
80% free.
root@fs0:~# df -h /mnt/b
Filesystem Size Used Avail Use% Mounted on
/dev/sde 11T 2.5T 41M 100% /mnt/b
root@fs0:~# btrfs fi df /mnt/b
Data,
2012 Mar 25
3
attempt to access beyond end of device and livelock
Hi Dongyang, Yan,
When testing BTRFS with RAID 0 metadata on linux-3.3, we see discard
ranges exceeding the end of the block device [1], potentially causing
dataloss; when this occurs, filesystem writeback becomes catatonic due
to continual resubmission.
The reproducer is quite simple [2]. Hope this proves useful...
Thanks,
Daniel
--- [1]
attempt to access beyond end of device
ram0: rw=129,
2013 Nov 29
0
BTRFS scrub hung
https://bugzilla.kernel.org/show_bug.cgi?id=66151
# btrfs scrub status /
scrub status for 02184910-5849-489f-b970-3ea35912a7af
scrub started at Tue Nov 26 21:30:01 2013, running for 15 seconds
total bytes scrubbed: 5.38GiB with 0 errors
# btrfs scrub cancel /
ERROR: scrub cancel failed on /: not running
# btrfs scrub resume /
ERROR: scrub is already running.
To cancel use
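Even when the scrub state machine is confused like this, the status text itself is still parseable; a small sketch that extracts the progress figure from saved `btrfs scrub status` output (sample copied from above):

```shell
# Sketch: pull the progress line out of saved `btrfs scrub status` output.
status='scrub status for 02184910-5849-489f-b970-3ea35912a7af
scrub started at Tue Nov 26 21:30:01 2013, running for 15 seconds
total bytes scrubbed: 5.38GiB with 0 errors'
printf '%s\n' "$status" | awk -F': ' '/total bytes scrubbed/{print $2}'
# prints: 5.38GiB with 0 errors
```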
2013 Mar 28
1
question about replacing a drive in raid10
Hi all,
I have a question about replacing a drive in raid10 (and linux kernel 3.8.4).
A bad disk was physically removed from the server. After this a new disk
was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs
FS.
After this the server was rebooted and I mounted the filesystem in
degraded mode. It seems that a previously started balance continued.
At this point I want to
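The sequence described above can be sketched as a script; all device and mount-point names are examples, and the guard keeps it a no-op unless the example replacement disk is actually present, since these commands rewrite on-disk state:

```shell
#!/bin/sh
# Hedged sketch of the recovery path discussed above. /dev/sda stands in
# for a surviving raid10 member and /dev/sdg for the replacement disk;
# both are example names, as is the /btrfs mount point.
SURVIVOR=/dev/sda
NEW=/dev/sdg
MNT=/btrfs
if [ -b "$NEW" ]; then
    mount -o degraded "$SURVIVOR" "$MNT"   # bring the fs up minus the dead disk
    btrfs device add "$NEW" "$MNT"         # add the replacement disk
    btrfs device delete missing "$MNT"     # drop the missing device and restripe
else
    echo "example only: $NEW not present, nothing done"
fi
```

Any balance that was interrupted may resume on its own after the degraded mount, which matches the behaviour reported above.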
2008 Dec 04
3
PROBLEM: oops when running fsstress against compressed btrfs filesystem
Chris:
I'm consistently getting oopses when running fsstress against both
single and multiple device compressed btrfs filesystems using kernels
built from the current btrfs-unstable.
In this report, I'm describing an incident with a single device
filesystem. Once the oops occurs, all I/O appears to stop though iowait
is still reported, and fsstress does not make apparent
2011 Jan 18
6
BUG while writing to USB btrfs filesystem
While untar'ing an image to an SD card via a reader, I got the
following bug. The system also has a btrfs root, and a whole swath of
processes went into uninterruptible sleep. I was able to poke around
via ssh and sysrq, and already had netconsole set up to capture the
bug.
Root fs is on /dev/sdi1, and /dev/sdj2 is the card reader which was
the target of the untar.
[29571.448889] sd
2012 Feb 26
0
"device delete" kills contents
Hello, linux-btrfs,
I've (once again) tried "add" and "delete".
First, with 3 devices (partitions):
mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1
Mounted (to /mnt/btr), filled with about 100 GByte data.
Then
btrfs device add /dev/sdj1 /mnt/btr
results in
# show
Label: none uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770
Total devices 4 FS bytes