similar to: Remove Missing Device from Raid1 error

Displaying 20 results from an estimated 40000 matches similar to: "Remove Missing Device from Raid1 error"

2012 Jan 22
1
Trying to mount RAID1 degraded with removed disk -> open_ctree failed
Hi, I have set up a RAID1 using 3 devices (500G each) on separate disks. After physically removing one disk, the filesystem cannot be mounted in degraded or recovery mode. When the disk is put back in, the filesystem can be mounted without errors. I did a cold swap (power cycle after removal/insertion of the disk). Here are the details: - latest kernel 3.2.1 and btrfs-tools on Xubuntu
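For reference, the degraded-mount sequence being attempted here normally looks like the sketch below; the device name and mount point are illustrative, not taken from the original report:
# mount -o degraded /dev/sdb /mnt        # mount from any surviving member
# btrfs device delete missing /mnt       # then drop the absent device so raid1 can re-mirror onto the remaining disks
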
2013 Jun 12
0
Mounting RAID1 writeable with two devices missing but all data present
Hi, I have potentially found a (minor) bug (or missing feature) in btrfs, but it might be a misunderstanding, so I'd like to ask here first before reporting it in Bugzilla. I have a btrfs file system in RAID1 mode with two missing devices, but I know that all data is still present on the remaining device. btrfs refuses to mount this fs in rw mode (too many missing devices), as it probably just counts
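When a rw mount is refused with "too many missing devices", a read-only degraded mount is usually still possible and is enough to copy the data off; a minimal sketch, with an illustrative device name and mount point:
# mount -o degraded,ro /dev/sdc /mnt
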
2012 Oct 23
0
Between single/dup and raid1/raid1
Hello, today I wanted to remove one drive from raid1, and people at #btrfs advised me to use '-dconvert=single' before 'btrfs device delete'. I thought of adding '-mconvert=dup' too, but the kernel does not let me do that. It looks like 'dup' is disallowed for an array of multiple devices. So, to go back to a single-drive
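The sequence suggested on #btrfs corresponds roughly to the sketch below (mount point illustrative); as the poster notes, the kernel rejects -mconvert=dup while more than one device is attached, so the metadata conversion has to wait until only one drive remains:
# btrfs balance start -dconvert=single /mnt     # stop mirroring data across both drives
# btrfs device delete /dev/sdb /mnt             # remove the unwanted drive
# btrfs balance start -mconvert=dup /mnt        # dup metadata is accepted once only one drive remains
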
2012 Oct 26
4
Can't replace a faulty disk of raid1
Hello, I had a raid1 btrfs (540GB) on vanilla 3.6.3. A disk failed, so I removed it at power off, plugged in a new one, partitioned it (to 110GB, by mistake), and added it to btrfs. I tried to remove the missing device, and it said "Input/output error" after a while. Subsequent attempts simply gave "Invalid argument". I repartitioned, rebooted the system, and grew the partition:
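The usual recovery path for this situation is sketched below; device names, devid and mount point are placeholders, and on newer kernels 'btrfs replace start' can do the same job in one step:
# btrfs device add /dev/sdc1 /mnt             # add the new, correctly sized partition
# btrfs device delete missing /mnt            # re-mirror the chunks of the failed disk onto it
# btrfs filesystem resize <devid>:max /mnt    # pick up the extra space if the partition is grown afterwards
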
2011 Jul 12
1
after mounting with -o degraded: ioctl: LOOP_CLR_FD: Device or resource busy
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
mkfs.btrfs -d raid1 -m raid1 img5 img6
losetup /dev/loop4 img5
losetup /dev/loop5 img6
btrfs device scan
mount -t btrfs /dev/loop4 dir
umount dir
losetup -d /dev/loop5
mount -t btrfs -o degraded /dev/loop4 dir
umount dir
losetup -d /dev/loop4
ioctl: LOOP_CLR_FD: Device or resource busy
mkfs.ext3 /dev/loop4
mke2fs 1.39
2012 Nov 22
0
raid10 data fs full after degraded mount
Hello, on a fs with 4 disks, raid10 for data, one drive was failing and has been removed. After a reboot and 'mount -o degraded...', the fs looks full, even though before removal of the failed device it was almost 80% free.
root@fs0:~# df -h /mnt/b
Filesystem      Size  Used Avail Use% Mounted on
/dev/sde         11T  2.5T   41M 100% /mnt/b
root@fs0:~# btrfs fi df /mnt/b
Data,
2012 Jan 13
5
Can't resize second device in RAID1
Hi, the situation:
Label: 'RootFS'  uuid: c87975a0-a575-405e-9890-d3f7f25bbd96
    Total devices 2 FS bytes used 284.98GB
    devid    2 size 311.82GB used 286.51GB path /dev/sdb3
    devid    1 size 897.76GB used 286.51GB path /dev/sda3
RootFS was created when sda3 was 897.76GB and sdb3 was 311.82GB. I have now freed other space on sdb, so I deleted sdb3 and recreated it occupying all
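btrfs filesystem resize acts on devid 1 unless told otherwise, which is a common stumbling block here; a sketch for growing the second device, assuming the filesystem is mounted at /mnt:
# btrfs filesystem resize 2:max /mnt    # grow devid 2 (here /dev/sdb3) to fill its recreated partition
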
2009 Nov 27
5
unexpected raid1 behavior?
Hi, I'm starting to play with btrfs on my new computer. I'm running Gentoo and have compiled the 2.6.31 kernel, enabling btrfs. I now have 2 partitions (on 2 different SATA disks) that are free for me to play with, each about 375 GB in size. I wanted to create a "raid1" volume using these two partitions, so I did:
# mkfs.btrfs -d raid1 /dev/sda5 /dev/sdb5
# mount
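A complete version of that setup, with both data and metadata mirrored and a check of the resulting profiles, might look like the sketch below (mount point illustrative, current btrfs-progs assumed):
# mkfs.btrfs -d raid1 -m raid1 /dev/sda5 /dev/sdb5
# btrfs device scan                 # let the kernel see all members before mounting
# mount /dev/sda5 /mnt
# btrfs filesystem df /mnt          # confirms which profiles (raid1, single, dup) are actually in use
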
2011 Jan 21
0
btrfs RAID1 woes and tiered storage
I've been experimenting lately with the btrfs RAID1 implementation and have to say that it is performing quite well, but there are a few problems: * when I purposefully damage partitions on which btrfs stores data (for example, by changing the case of letters), it will read the other copy and return correct data. It doesn't report this fact in dmesg every time, but it does
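Checksum repairs of this kind can be triggered and counted explicitly with scrub, which appeared in kernels somewhat newer than the one used in this post; a sketch, mount point illustrative:
# btrfs scrub start /mnt     # read every copy, rewrite blocks whose checksum fails from the good mirror
# btrfs scrub status /mnt    # report how many errors were found and corrected
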
2013 Aug 24
10
Help interpreting RAID1 space allocation
I've created a test volume and copied a bulk of data to it; however, the results of the space allocation are confusing at best. I've tried to capture the history of events leading up to the current state. This is all on a Debian Wheezy system using a 3.10.5 kernel package (linux-image-3.10-2-amd64) and btrfs tools v0.20-rc1 (Debian package 0.19+20130315-5). The host uses an
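When reading RAID1 numbers, it helps to compare the two views sketched below: 'filesystem show' reports raw bytes allocated per device (each mirror counted separately), while 'filesystem df' reports logical sizes (each byte counted once even though it is stored twice). Mount point illustrative:
# btrfs filesystem show
# btrfs filesystem df /mnt
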
2011 Aug 14
3
Can't mount degraded (it worked in kernel 2.6.38.8)
# uname -a
Linux dhcppc1 3.0.1-xxxx-std-ipv6-64 #1 SMP Sun Aug 14 17:06:21 CEST 2011 x86_64 x86_64 x86_64 GNU/Linux
mkdir test5
cd test5
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
losetup /dev/loop2 img5
losetup /dev/loop3 img6
mkfs.btrfs -d raid1 -m raid1 /dev/loop2 /dev/loop3
btrfs device scan
btrfs filesystem show
Label: none uuid:
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings, until yesterday I was running a btrfs filesystem across two 2.0 TiB disks in RAID1 mode for both metadata and data without any problems. As space was getting short, I wanted to extend the filesystem with two additional drives I had lying around, both 1.0 TiB in size. Knowing little about the btrfs RAID implementation, I thought I had to switch to RAID10 mode, which I was told is
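Switching profiles is done by adding the drives and then rebalancing with convert filters, roughly as sketched below (device names and mount point illustrative); note that btrfs raid1 can already spread its chunk pairs across more than two devices, so the convert step is only needed if raid10 striping is actually wanted:
# btrfs device add /dev/sdc /dev/sdd /mnt
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt
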
2012 Apr 17
3
Btrfs in degraded mode
Hello, I have created a btrfs filesystem in a RAID1 setup with 2 disks. Everything works fine, but when I unmount the device and remount it in degraded mode, data still goes to both disks. Ideally, in degraded mode only the healthy disk should show activity, not the failed one. System config: Base OS: Slackware, kernel: Linux 3.3.2. "sar -pd 2 10" shows me that the data is
2013 Oct 04
0
Recovering btrfs fs after "failed to read chunk root"
So I'm writing up my (mis)adventure with btrfs here, hoping to help the developers or someone with similar problems. I had a btrfs filesystem at work using two 1TB disks, raid1 for both data and metadata. A week ago one of the two disks started having hundreds of reallocated sectors, so I decided to replace it. I removed the failing disk, mounted with -o degraded, and everything worked fine. The day after
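For a filesystem that fails with "failed to read chunk root", the usual escalation is roughly the following; the device and target paths are placeholders, and 'btrfs rescue chunk-recover' only exists in newer btrfs-progs:
# mount -o recovery,ro /dev/sdb /mnt        # try older tree roots, read-only
# btrfs restore /dev/sdb /target/dir        # if mounting fails, copy files out to a different, healthy filesystem
# btrfs rescue chunk-recover /dev/sdb       # last resort: rebuild the chunk tree by scanning the whole disk
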
2011 Jan 06
0
Raid1 degraded mode
I am trying to understand how btrfs works with RAID1. Is it possible to create the filesystem with -m raid1 -d raid1 when only one device is available at creation time? Is it possible to refer to a second device as "missing"? The use case I am thinking of is converting an existing raid1 setup from mdadm + lvm + ext4 to btrfs with raid1 and subvolumes. I would
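There is no mkfs-time "missing" placeholder in btrfs (unlike mdadm); the usual route is to start on the single free device and convert to raid1 once the second device is freed up, roughly as sketched below (device names and mount point are illustrative, and the balance convert filters need a reasonably recent kernel):
# mkfs.btrfs -d single -m dup /dev/sdb1          # start on the one available disk
# mount /dev/sdb1 /mnt                           # ...copy the data over, then retire the old md/lvm setup
# btrfs device add /dev/sda1 /mnt
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
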
2012 Dec 17
5
Feedback on RAID1 feature of Btrfs
Hello, I'm testing the Btrfs RAID1 feature on 3 disks of ~10GB each. The last one is not exactly 10GB (that would be too easy). The test machine is a KVM VM running an up-to-date Arch Linux with Linux 3.7 and btrfs-progs 0.19.20121005.
# uname -a
Linux seblu-btrfs-1 3.7.0-1-ARCH #1 SMP PREEMPT Tue Dec 11 15:05:50 CET 2012 x86_64 GNU/Linux
The filesystem was created with:
# mkfs.btrfs -L
2013 May 13
7
Remove a materially failed device from a Btrfs "single-raid" using partitions
Hello, I am on Ubuntu Server 13.04 with Linux 3.8. I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard drives has failed; I mean it's physically dead.
:~$ sudo btrfs filesystem show
Label: none  uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
    Total devices 5 FS bytes used 226.90GB
    devid 4 size 37.27GB used 31.01GB path /dev/sdd1
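With the data profile "single" there is no second copy, so nothing that lived only on the dead drive can be regenerated; the surviving files are normally rescued by mounting degraded and dropping the missing device, roughly as below (device name and mount point illustrative):
# mount -o degraded /dev/sda1 /mnt
# btrfs device delete missing /mnt     # attempts to drop the dead member; extents whose only copy was on it are unrecoverable
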
2013 Oct 06
5
btrfs device delete problem
Hi, I'm getting an error when trying to delete a device from a raid1 (data and metadata mirrored).
> btrfs filesystem show
failed to read /dev/sr0
Label: none  uuid: 78b5162b-489e-4de1-a989-a47b91adef50
    Total devices 2 FS bytes used 107.64GB
    devid 2 size 149.05GB used 109.01GB path /dev/sdh1
    devid 1 size 156.81GB used 109.03GB path /dev/sdb6
Btrfs v0.20-rc1
>
2013 Mar 26
1
[bug] mount and /proc/mounts disagrees
3.8.0+ #3
This happened after 'umount /btrfs' was interrupted by Ctrl-C.
# mount | egrep btrfs
/dev/mapper/mpathe on /btrfs type btrfs (rw,degraded)
# cat /etc/mtab | egrep btrfs
/dev/mapper/mpathe /btrfs btrfs rw,degraded 0 0
# cat /proc/mounts | egrep btrfs
# umount /btrfs
umount: /btrfs: not mounted
#
-Anand
2013 Mar 28
1
question about replacing a drive in raid10
Hi all, I have a question about replacing a drive in raid10 (on Linux kernel 3.8.4). A bad disk was physically removed from the server. After this, a new disk was added to the raid10 btrfs FS with "btrfs device add /dev/sdg /btrfs". Then the server was rebooted and I mounted the filesystem in degraded mode. It seems that a previously started balance continued. At this point I want to
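To finish a replacement of this kind, the missing member still has to be removed so that its chunks are rebuilt onto the new disk; a sketch using the paths named in the post, with the balance-status check being my addition:
# btrfs balance status /btrfs          # check whether the earlier balance is still running
# btrfs device delete missing /btrfs   # rebuild the chunks that referenced the removed disk onto /dev/sdg
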