Displaying 20 results from an estimated 20000 matches similar to: "Raid1 degraded mode"
2012 Jan 22
1
Trying to mount RAID1 degraded with removed disk -> open_ctree failed
Hi,
I have set up a RAID1 using 3 devices (500G each) on separate disks.
After physically removing one disk, the filesystem cannot be mounted in
either degraded or recovery mode. When the disk is put back in, the
filesystem can be mounted without errors. I did a cold-swap (powercycle
after removal/insertion of the disk).
Here are the details:
- latest kernel 3.2.1 and btrfs-tools on xubuntu
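A minimal sketch of the degraded-mount attempt for a setup like this (device
name and mount point are placeholders, not taken from the report):
# btrfs filesystem show
# mount -o degraded /dev/sdb /mnt
# dmesg | tail
btrfs filesystem show should list the device as missing, and dmesg usually
carries the reason behind the open_ctree failure.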
2011 Jan 12
1
Filesystem creation in "degraded mode"
I've had a go at determining exactly what happens when you create a
filesystem without enough devices to meet the requested replication
strategy:
# mkfs.btrfs -m raid1 -d raid1 /dev/vdb
# mount /dev/vdb /mnt
# btrfs fi df /mnt
Data: total=8.00MB, used=0.00
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=153.56MB, used=24.00KB
Metadata:
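The df output above shows that, with only one device, mkfs quietly falls back
to DUP metadata and single data instead of RAID1. A hedged sketch of bringing
such a filesystem up to real RAID1 once a second device is available (balance
convert filters need kernel 3.3 or newer; device name is a placeholder):
# btrfs device add /dev/vdc /mnt
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
# btrfs fi df /mnt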
2012 Apr 17
3
Btrfs in degraded mode
Hello,
I have created a btrfs filesystem with a RAID1 setup on 2 disks. Everything
works fine, but when I umount the device and remount it in degraded mode,
the data still goes to both disks. Ideally, in degraded mode only the healthy
disk should show activity, not the failed one.
System Config:
Base OS: Slackware
kernel: linux 3.3.2
"sar -pd 2 10" shows me that the data is
2011 Jul 12
1
after mounting with -o degraded: ioctl: LOOP_CLR_FD: Device or resource busy
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
mkfs.btrfs -d raid1 -m raid1 img5 img6
losetup /dev/loop4 img5
losetup /dev/loop5 img6
btrfs device scan
mount -t btrfs /dev/loop4 dir
umount dir
losetup -d /dev/loop5
mount -t btrfs -o degraded /dev/loop4 dir
umount dir
losetup -d /dev/loop4
ioctl: LOOP_CLR_FD: Device or resource busy
mkfs.ext3 /dev/loop4
mke2fs 1.39
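When losetup -d reports "Device or resource busy", the loop device is usually
still referenced somewhere. A hedged checklist, not specific to this report:
# losetup -a
# grep loop /proc/mounts
# btrfs filesystem show
On a test box, unloading and reloading the btrfs module also drops the device
references left behind by btrfs device scan.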
2012 Oct 23
0
Between single/dup and raid1/raid1
Hello,
today I wanted to remove one drive from raid1, and people at #btrfs advised me
to use '-dconvert=single' before 'btrfs device delete'.
I thought of adding '-mconvert=dup' too, but the kernel does not let me do that.
It looks like 'dup' is disallowed for an array of multiple devices. So, to go
back to a single-drive
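A hedged sketch of the sequence usually suggested for shrinking to one drive,
with dup metadata restored only after the array is down to a single device
(device names are placeholders; recent btrfs-progs ask for -f when reducing
metadata redundancy):
# btrfs balance start -dconvert=single -mconvert=single /mnt
# btrfs device delete /dev/sdb /mnt
# btrfs balance start -mconvert=dup /mnt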
2012 Nov 22
0
raid10 data fs full after degraded mount
Hello,
on a fs with 4 disks, raid 10 for data, one drive was failing and has
been removed. After reboot and 'mount -o degraded...', the fs looks
full, even though before removal of the failed device it was almost
80% free.
root@fs0:~# df -h /mnt/b
Filesystem Size Used Avail Use% Mounted on
/dev/sde 11T 2.5T 41M 100% /mnt/b
root@fs0:~# btrfs fi df /mnt/b
Data,
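The df numbers alone do not show which profile the space went to. Chunks
allocated while mounted degraded can end up in the single profile, and a
balance after a replacement disk is added normally reclaims them; a hedged
sketch (replacement device name is a placeholder):
# btrfs fi show /mnt/b
# btrfs fi df /mnt/b
# btrfs device add /dev/sdf /mnt/b
# btrfs balance start /mnt/b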
2013 Jun 12
0
Mounting RAID1 writeable with two devices missing but all data present
Hi,
I have potentially found a (minor) bug (or missing feature) in btrfs,
but it might be a misunderstanding, so I'd like to ask here first before
reporting in Bugzilla.
I have a btrfs file system in RAID1 mode with two missing devices, but
I know that all data is still present on the remaining device. btrfs
refuses to mount this fs in rw mode (too many missing devices), as it
probably just counts
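A hedged fallback when a writable degraded mount is refused is to mount
read-only and copy the data off (device name and paths are placeholders):
# mount -o degraded,ro /dev/sdc /mnt
# cp -a /mnt/important /backup/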
2012 Oct 26
4
Can't replace a faulty disk of raid1
Hello,
I had a raid1 btrfs (540GB) on vanilla 3.6.3. A disk failed, so I removed it at
power off, plugged in a new one, partitioned it (to 110GB, by mistake), and added
it to btrfs.
I tried to remove the missing device, and it said "Input/output error" after a
while. Next attempts simply gave "Invalid argument".
I repartitioned, rebooted the system, and made the partition grow:
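A hedged sketch of the replacement sequence once the new partition has its
full size again (device name is a placeholder): add the new device, drop the
missing one, then let the filesystem use the whole partition. Kernels 3.8 and
newer also offer "btrfs replace" for this.
# btrfs device add /dev/sdb1 /mnt
# btrfs device delete missing /mnt
# btrfs filesystem resize max /mnt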
2011 Aug 14
3
cant mount degraded (it worked in kernel 2.6.38.8)
# uname -a
Linux dhcppc1 3.0.1-xxxx-std-ipv6-64 #1 SMP Sun Aug 14 17:06:21 CEST
2011 x86_64 x86_64 x86_64 GNU/Linux
mkdir test5
cd test5
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
losetup /dev/loop2 img5
losetup /dev/loop3 img6
mkfs.btrfs -d raid1 -m raid1 /dev/loop2 /dev/loop3
btrfs device scan
btrfs filesystem show
Label: none uuid:
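A hedged continuation of the test, mirroring the loop-device recipe from the
similar report above (mount point is a placeholder):
# mount -t btrfs /dev/loop2 /mnt
# umount /mnt
# losetup -d /dev/loop3
# mount -t btrfs -o degraded /dev/loop2 /mnt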
2013 Aug 24
10
Help interpreting RAID1 space allocation
I've created a test volume and copied a bulk of data to it; however, the
results of the space allocation are confusing at best. I've tried to
capture the history of events leading up to the current state. This is
all on a Debian Wheezy system using a 3.10.5 kernel package
(linux-image-3.10-2-amd64) and btrfs tools v0.20-rc1 (Debian package
0.19+20130315-5). The host uses an
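A hedged reminder for reading the numbers: btrfs fi df reports logical,
single-copy totals per block-group type, while the per-device "used" figures
from btrfs fi show are raw bytes, so on RAID1 the device figures add up to
roughly twice the logical data.
# btrfs fi show
# btrfs fi df /mnt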
2012 Dec 17
5
Feeback on RAID1 feature of Btrfs
Hello,
I'm testing the Btrfs RAID1 feature on 3 disks of ~10GB. The last one is not
exactly 10GB (that would be too easy).
About the test machine, it's a kvm vm running an up-to-date archlinux
with linux 3.7 and btrfs-progs 0.19.20121005.
#uname -a
Linux seblu-btrfs-1 3.7.0-1-ARCH #1 SMP PREEMPT Tue Dec 11 15:05:50 CET
2012 x86_64 GNU/Linux
Filesystem was created with:
# mkfs.btrfs -L
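The label is cut off in the snippet; a hedged example of a labelled
three-device RAID1 create (label and device names are placeholders, not the
poster's):
# mkfs.btrfs -L test-raid1 -d raid1 -m raid1 /dev/vdb /dev/vdc /dev/vdd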
2012 Jun 10
1
Centos 6 / Kickstart Using degraded mdadm RAID1
I'm trying to install a bunch of C6 machines that initially use a degraded mdadm RAID 1.
Anaconda refuses to let me create a RAID 1 array with only one member.
Based on some reading, it seems that I should be able to use kickstart
with the PRE scripts to do this. However, after trying for a couple of
hours, it doesn't seem that anaconda will allow it; it just boots the
created arrays. At best I end
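A hedged sketch of the %pre approach mentioned above, building the array with
the literal word "missing" as the absent member (device names are
placeholders; anaconda may still ignore pre-built arrays, as described):
%pre
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 /dev/sda1 missing
%end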
2011 Aug 06
0
Remove Missing Device from Raid1 error
4 identical devices, raid 1.
Ubuntu 11.10 Alpha 3
I've physically unplugged TWO (alternating) devices.
Mounted as degraded, the filesystem works great.
Run the command "btrfs dev del missing /mnt"
Response:
ERROR: error removing the device 'missing'
No error in dmesg
"btrfs-vol -r missing /mnt"
Response:
removing missing devices from /home
ioctl
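A workaround often suggested in similar threads is to add a replacement device
before deleting the missing one, so the removal has somewhere to re-mirror its
chunks (device name is a placeholder):
# btrfs device add /dev/sde /mnt
# btrfs device delete missing /mnt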
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings,
until yesterday I was running a btrfs filesystem across two 2.0 TiB
disks in RAID1 mode for both metadata and data without any problems.
As space was getting short I wanted to extend the filesystem by two
additional drives lying around, which both are 1.0 TiB in size.
Knowing little about the btrfs RAID implementation I thought I had to
switch to RAID10 mode, which I was told is
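btrfs RAID1 is not limited to two devices; it keeps two copies spread across
however many devices are present, so adding the new drives and rebalancing can
be enough without any profile change. A hedged sketch (device names are
placeholders; convert filters need kernel 3.3 or newer), with the raid10
conversion only as a separate, optional last step:
# btrfs device add /dev/sdc /dev/sdd /mnt
# btrfs balance start /mnt
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt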
2009 Nov 27
5
unexpected raid1 behavior?
Hi, I'm starting to play with btrfs on my new computer. I'm running Gentoo and
have compiled the 2.6.31 kernel, enabling btrfs.
Now I have 2 partitions (on 2 different SATA disks) that are free for me to
play with, each about 375 GB in size. I wanted to create a "raid1" volume
using these two partitions, so I did:
# mkfs.btrfs -d raid1 /dev/sda5 /dev/sdb5
# mount
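A hedged sketch of the usual next steps after a multi-device mkfs (mount point
is a placeholder); btrfs fi df then confirms whether Data really ended up as
RAID1:
# btrfs device scan
# mount /dev/sda5 /mnt
# btrfs fi df /mnt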
2012 Aug 12
13
raw partition or LV for btrfs?
I notice this question on the wiki/faq:
https://btrfs.wiki.kernel.org/index.php/UseCases#What_is_best_practice_when_partitioning_a_device_that_holds_one_or_more_btr-filesystems
and as it hasn't been answered, can anyone comment on the subject?
Various things come to mind:
a) partition the disk, create an LVM partition, and create lots of small
LVs, format each as btrfs
b)
2019 Feb 25
0
Problem with mdadm, raid1 and automatically adds any disk to raid
> Hi.
>
> CENTOS 7.6.1810, fresh install - use this as a base to create/upgrade
> new/old machines.
>
> I was trying to set up two disks as a RAID1 array, using these lines
>
> mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb1
> /dev/sdc1
> mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb2
> /dev/sdc2
> mdadm
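The quoted commands use --level=0, which builds RAID0 stripes rather than the
RAID1 mirrors the poster describes; a RAID1 create for the same partitions
would look like:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1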
2011 Jan 21
0
btrfs RAID1 woes and tiered storage
I've been experimenting lately with the btrfs RAID1 implementation and have to say
that it is performing quite well, but there are few problems:
* when I purposefully damage partitions on which btrfs stores data (for
example, by changing the case of letters) it will read the other copy and
return correct data. It doesn't report this fact in dmesg every time, but it
does
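On kernels from 3.0 onwards, a scrub pass reads every copy, reports checksum
mismatches and rewrites bad copies from the good mirror, which makes the
silent repairs described above visible:
# btrfs scrub start /mnt
# btrfs scrub status /mnt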
2012 Mar 02
1
nocow flags
I set the C (NOCOW) and z (Not_Compressed) flags on a folder but the extent counts of files contained there keep increasing.
Said files are large and frequently modified but not changing in size. This does not happen when the filesystem is mounted with nodatacow.
I'm using this as a workaround since subvolumes can't be mounted with different options simultaneously, i.e. one with
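chattr +C on a directory only affects files created in it afterwards (or files
that are still empty); files that already contained data stay copy-on-write,
which would explain the growing extent counts. A hedged sketch (path is a
placeholder):
# chattr +C /srv/images
# lsattr -d /srv/images
New or freshly copied files in that directory then inherit the C flag.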
2011 May 20
1
btrfsck: couldn't open because of unsupported option features (8)
After upgrading from 2.6.39-rc7 to 2.6.39 this morning, I tried to
mount my 3 disk btrfs volume (no subvolumes, space caching enabled,
lzo compression) and received some parent transid errors (going back
to rc7 didn't help, though):
btrfs: disk space caching is enabled
parent transid verify failed on 6038227976192 wanted 337418 found 337853
parent transid verify failed on 6038227976192