similar to: btrfs device delete problem

Displaying 20 results from an estimated 10000 matches similar to: "btrfs device delete problem"

2013 Aug 24
10
Help interpreting RAID1 space allocation
I've created a test volume and copied a bulk of data to it, however the results of the space allocation are confusing at best. I've tried to capture the history of events leading up to the current state. This is all on a Debian Wheezy system using a 3.10.5 kernel package (linux-image-3.10-2-amd64) and btrfs tools v0.20-rc1 (Debian package 0.19+20130315-5). The host uses an
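
To interpret RAID1 allocation like this, comparing the per-device and per-profile views usually helps; a minimal sketch, assuming the volume is mounted at /mnt (hypothetical path):

# btrfs filesystem show /mnt   # per-device "used" counts raw bytes, including both RAID1 copies
# btrfs filesystem df /mnt     # per-profile totals are logical, so RAID1 data appears once here

With RAID1 each chunk is allocated on two devices, so the device-level "used" figures sum to roughly twice the logical data size, which is the usual source of confusion.
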
2012 May 04
2
btrfs scrub BUG: unable to handle kernel NULL pointer dereference
I think I have some failing hard drives, they are disconnected for now.
stan {~} root# btrfs filesystem show
Label: none  uuid: d71404d4-468e-47d5-8f06-3b65fa7776aa
	Total devices 2 FS bytes used 6.27GB
	devid    1 size 9.31GB used 8.16GB path /dev/sde6
	*** Some devices missing
Label: none  uuid: b142f575-df1c-4a57-8846-a43b979e2e09
	Total devices 8 FS bytes used
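
For reference, the usual scrub invocation on a volume with members missing starts from a degraded mount; a sketch using /dev/sde6 from the output above and a hypothetical /mnt. The NULL dereference in the subject is a kernel bug, not expected behaviour:

# mount -o degraded /dev/sde6 /mnt
# btrfs scrub start -Bd /mnt    # -B waits for completion, -d prints per-device statistics
# btrfs scrub status /mnt
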
2012 Oct 26
4
Can't replace a faulty disk of raid1
Hello, I had a raid1 btrfs filesystem (540GB) on vanilla 3.6.3. A disk failed, and I removed it at power off, plugged in a new one, partitioned it (to 110GB, by mistake), and added it to btrfs. I tried to remove the missing device, and it said "Input/output error" after a while. Subsequent attempts simply gave "Invalid argument". I repartitioned, rebooted the system, and made the partition grow:
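
The replacement flow that era of tools expected is add-then-delete; a sketch, with /dev/sdX standing in for a surviving member and /dev/sdY for the new partition (both hypothetical names). The "Invalid argument" is consistent with the new 110GB partition being too small to absorb the data held by the missing device:

# mount -o degraded /dev/sdX /mnt
# btrfs device add /dev/sdY /mnt      # new device must be large enough for the relocated chunks
# btrfs device delete missing /mnt    # "missing" refers to the absent raid1 member

Kernels 3.8 and later also ship "btrfs replace start", which copies onto the new device directly.
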
2013 May 13
7
Remove a materially failed device from a Btrfs "single-raid" using partitions
Hello, I am on Ubuntu Server 13.04 with Linux 3.8. I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard drives has failed, I mean it's materially dead.
:~$ sudo btrfs filesystem show
Label: none  uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
	Total devices 5 FS bytes used 226.90GB
	devid    4 size 37.27GB used 31.01GB path /dev/sdd1
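
Worth noting: with the "single" data profile there is only one copy of each extent, so anything stored on the dead drive is gone; removing the device only makes the filesystem consistent again. A sketch, assuming /dev/sda1 is a surviving member and /mnt a hypothetical mount point:

# mount -o degraded /dev/sda1 /mnt
# btrfs device delete missing /mnt   # drops the dead device; data that lived only on it is not recovered
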
2013 Dec 07
8
Can't remove empty directory after kernel panic, no errors in dmesg
Hi List, so first the basics. I'm running Arch Linux with 3.13-rc2, btrfs-progs 0.20rc1.3-2 from the repo, and I'm using an SSD. I was having kernel panics with my USB 3.0 Gigabit card and was trying to capture a panic output. These panics are intermittent and most often happen while using Chromium. Anyway, my system panicked while I was in Chromium. After the reboot Chromium
2011 Aug 14
3
can't mount degraded (it worked in kernel 2.6.38.8)
# uname -a
Linux dhcppc1 3.0.1-xxxx-std-ipv6-64 #1 SMP Sun Aug 14 17:06:21 CEST 2011 x86_64 x86_64 x86_64 GNU/Linux
mkdir test5
cd test5
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
losetup /dev/loop2 img5
losetup /dev/loop3 img6
mkfs.btrfs -d raid1 -m raid1 /dev/loop2 /dev/loop3
btrfs device scan
btrfs filesystem show
Label: none  uuid:
2013 Aug 16
4
How btrfs resize should work?
Hi, I am working on system storage manager (ssm), trying to implement btrfs resize correctly, however I have some troubles with it.
# mkfs.btrfs /dev/sda /dev/sdb
# mount /dev/sda /mnt/test
# btrfs filesystem show
failed to open /dev/sr0: No medium found
Label: none  uuid: 8dce5578-a2bc-416e-96fd-16a2f4f770b7
	Total devices 2 FS bytes used 28.00KB
	devid    2 size 50.00GB used 2.01GB path
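
On a multi-device filesystem, resize applies per device, which is what ssm has to account for; a sketch of the devid-qualified syntax, assuming devid 2 from the "fi show" output above:

# btrfs filesystem resize 2:max /mnt/test    # grow device 2 to its full size
# btrfs filesystem resize 2:-10g /mnt/test   # shrink device 2 by 10GiB

Without a devid prefix the command acts on devid 1 only, which is a common surprise.
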
2012 Jan 22
1
Trying to mount RAID1 degraded with removed disk -> open_ctree failed
Hi, I have set up a RAID1 using 3 devices (500G each) on separate disks. After removing one disk physically, the filesystem cannot be mounted in degraded or recovery mode. When the disk is put back in, the filesystem can be mounted without errors. I did a cold swap (power cycle after removal/insertion of the disk). Here are the details: - latest kernel 3.2.1 and btrfs-tools on xubuntu
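
For comparison, the degraded mount one would expect to work on a 3-way raid1 looks like the following; a sketch, with /dev/sdb1 as a hypothetical surviving member. A read-only degraded mount is often worth trying first when open_ctree fails:

# btrfs filesystem show                  # confirm which devid is reported missing
# mount -o degraded,ro /dev/sdb1 /mnt
# dmesg | tail                           # open_ctree failures usually come with a more specific reason here
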
2012 Apr 17
3
Btrfs in degraded mode
Hello, I have created a btrfs filesystem with a RAID1 setup having 2 disks. Everything works fine, but when I umount the device and remount it in degraded mode, the data still goes to both disks; ideally, in degraded mode only the working disk should show activity, not the failed one. System config: Base OS: Slackware, kernel: linux 3.3.2. "sar -pd 2 10" shows me that the data is
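
Besides sar, per-disk write counters make it easy to see which member is receiving the I/O; a sketch using iostat from the same sysstat package, with sdb and sdc as hypothetical member names:

# iostat -dxk 2 sdb sdc    # watch the w/s and wkB/s columns for each member
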
2011 Jan 12
1
Filesystem creation in "degraded mode"
I've had a go at determining exactly what happens when you create a filesystem without enough devices to meet the requested replication strategy:
# mkfs.btrfs -m raid1 -d raid1 /dev/vdb
# mount /dev/vdb /mnt
# btrfs fi df /mnt
Data: total=8.00MB, used=0.00
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=153.56MB, used=24.00KB
Metadata:
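
The fallback shown above (DUP metadata and unreplicated data instead of raid1 on a lone device) can be corrected once a second device arrives; a sketch, assuming a later kernel (3.3+) with balance convert filters and a hypothetical /dev/vdc:

# btrfs device add /dev/vdc /mnt
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt   # rewrite existing chunks with the raid1 profile
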
2013 Aug 04
2
Unable to unmount filesystem (bug in kernel reported in kern.log)
I tried to unmount a btrfs filesystem located on an external USB hard drive. This belonged to a raid1 data and metadata filesystem mounted in degraded mode. Unfortunately, I couldn't save the image of the filesystem, but I could see this error in kern.log:
Aug 4 02:23:55 rohan kernel: [ 3747.840027] usb 1-3: new high-speed USB device number 8 using ehci_hcd
Aug 4 02:23:55 rohan kernel: [
2012 Oct 25
46
[RFC] New attempt at a better "btrfs fi df"
Hi all, this is a new attempt to improve the output of the command "btrfs fi df". The previous attempt received a good reception. However, there was no general consensus about the wording. Moreover, I still didn't understand how btrfs was using the disks. My first attempt was to develop a new command which shows how the disks
2012 Jul 14
2
bug: raid10 filesystem has suddenly ceased to mount
Hi! The problem is that the BTRFS raid10 filesystem, without any understandable cause, refuses to mount. Here is dmesg output:
[77847.845540] device label linux-btrfs-raid10 devid 3 transid 45639 /dev/sdc1
[77848.633912] btrfs: allowing degraded mounts
[77848.633917] btrfs: enabling auto defrag
[77848.633919] btrfs: use lzo compression
[77848.633922] btrfs: turning on flush-on-commit
[77848.658879]
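
When a mount still fails after the messages above, the log-replay fallback introduced in kernel 3.2 is the usual next step; a hedged sketch, since it can discard the most recent transactions:

# mount -o recovery,degraded /dev/sdc1 /mnt   # "recovery" falls back to older tree roots
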
2013 Mar 29
8
minimum kernel version for btrfsprogs.0.20?
Creating a btrfs file system using btrfs-progs-0.20.rc1.20130308git704a08c-1.fc19, and either kernel 3.6.10-4.fc18 or 3.9.0-0.rc3.git0.3.fc19, makes a file system that cannot be mounted by kernel 3.6.10-4.fc18. It can be mounted by kernel 3.8.4. I haven't tested any other 3.8, or any 3.7 kernels. Is this expected? dmesg reports:
[ 300.014764] btrfs: disk space caching is enabled
[
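
One likely cause is an incompat feature flag that the newer mkfs enables; extended inode refs (added in kernel 3.7) is a candidate that would match 3.8 mounting while 3.6 refuses. A way to check, assuming a btrfs-progs build that ships btrfs-show-super:

# btrfs-show-super /dev/sda | grep incompat   # nonzero feature bits here gate which kernels can mount
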
2013 Jan 12
4
obscure out of space, df and fi df are way off
Very low priority. No user data at risk. 8GB virtual disk being installed to, and the installer is puking. I'm trying to figure out why. I first get an rsync error 12, followed by the installer crashing. What's interesting is this, deleting irrelevant source file systems, just showing the mounts for the installed system:
[root@localhost tmp]# df
Filesystem
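
A frequent cause of "out of space while df shows room" is that all raw space is allocated to chunks that are individually underused; a balance with a usage filter compacts them. A sketch, assuming kernel 3.3+ where balance filters exist and a hypothetical /mnt:

# btrfs fi df /mnt                     # compare per-profile total (allocated) against used
# btrfs balance start -dusage=5 /mnt   # rewrite data chunks that are <=5% full, freeing raw space
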
2011 Jul 12
1
after mounting with -o degraded: ioctl: LOOP_CLR_FD: Device or resource busy
dd if=/dev/null of=img5 bs=1 seek=2G
dd if=/dev/null of=img6 bs=1 seek=2G
mkfs.btrfs -d raid1 -m raid1 img5 img6
losetup /dev/loop4 img5
losetup /dev/loop5 img6
btrfs device scan
mount -t btrfs /dev/loop4 dir
umount dir
losetup -d /dev/loop5
mount -t btrfs -o degraded /dev/loop4 dir
umount dir
losetup -d /dev/loop4
ioctl: LOOP_CLR_FD: Device or resource busy
mkfs.ext3 /dev/loop4
mke2fs 1.39
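
The busy error indicates the kernel still holds a reference to the loop device after the unmount, commonly via btrfs's device cache populated by "btrfs device scan". Much newer kernels and progs can drop stale entries explicitly; a hedged sketch:

# btrfs device scan --forget /dev/loop4   # kernel 5.0+ / btrfs-progs 5.x: drop the stale device cache entry
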
2013 Sep 23
12
balance induced csum errors
SAMSUNG SSD 830 Series
CPU0: Intel® Core(TM) i7-2820QM CPU @ 2.30GHz (fam: 06, model: 2a, stepping: 07)
8GB RAM (quite heavily tested, not recently, with several days of memtest)
kernel 3.11.1-200.fc19.x86_64 running on baremetal
btrfs-progs-0.20.rc1.20130308git704a08c-1.fc19.x86_64
Today I did a scrub on a btrfs volume, with no messages or errors in the console, dmesg, or journal. Immediately after
2013 Oct 05
10
Linux Arch: kernel BUG at fs/btrfs/inode.c:873!
Hi, I have a home server on Arch Linux (kernel 3.11.2) that uses multi-device btrfs on the root filesystem. Until recently it worked completely fine. Yesterday I rebooted it and the machine did not wake up. I booted from a USB stick (kernel 3.10) and tried to mount the filesystem. Here is the oops I see:
[ 41.676217] device fsid 25e6a6fa-fe1f-4be5-a638-eeac948f8c21 devid 8 transid 164237 /dev/sda
[
2012 Dec 17
5
Feedback on RAID1 feature of Btrfs
Hello, I'm testing the Btrfs RAID1 feature on 3 disks of ~10GB each. The last one is not exactly 10GB (that would be too easy). The test machine is a kvm VM running an up-to-date Arch Linux with linux 3.7 and btrfs-progs 0.19.20121005.
# uname -a
Linux seblu-btrfs-1 3.7.0-1-ARCH #1 SMP PREEMPT Tue Dec 11 15:05:50 CET 2012 x86_64 GNU/Linux
The filesystem was created with:
# mkfs.btrfs -L
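
For context, a 3-device raid1 creation looks like the sketch below (device names hypothetical; the poster's exact command is truncated above). With three unequal members, usable capacity is about half the raw total as long as the largest device is no bigger than the other two combined, since every chunk needs copies on two distinct devices:

# mkfs.btrfs -L test -d raid1 -m raid1 /dev/vdb /dev/vdc /dev/vdd
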
2019 Jan 30
3
C7, mdadm issues
On 30/01/19 16:49, Simon Matter wrote:
>> On 01/30/19 03:45, Alessandro Baggi wrote:
>>> On 29/01/19 20:42, mark wrote:
>>>> Alessandro Baggi wrote:
>>>>> On 29/01/19 18:47, mark wrote:
>>>>>> Alessandro Baggi wrote:
>>>>>>> On 29/01/19 15:03, mark wrote:
>>>>>>>