Similar to: Filesystem creation in "degraded mode"

Displaying 20 results from an estimated 500 matches similar to "Filesystem creation in 'degraded mode'".

2013 Apr 03 (2 replies): [bug] btrfs fi df doesn't show raid type after balance

Did something break? We are not reporting the raid type after balance.
-----------
# btrfs fi df /btrfs
Data, RAID0: total=2.00GB, used=2.03MB
Data: total=8.00MB, used=0.00
System, RAID0: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID0: total=2.00GB, used=216.00KB
Metadata: total=8.00MB, used=4.00KB
# btrfs bal /btrfs
Done, had to relocate 5 out of 5 chunks
# btrfs fi
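
A quick way to re-check this is to run the balance and then ask for the per-profile breakdown again; in later btrfs-progs the balance subcommand is spelled out in full (mount point as in the report):

# btrfs balance start /btrfs
# btrfs fi df /btrfs

Each Data/System/Metadata line should carry its profile annotation (e.g. "Data, RAID0:"); the report above is that those annotations disappear after the balance.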

2012 Oct 25 (46 replies): [RFC] New attempt to a better "btrfs fi df"

Hi all, this is a new attempt to improve the output of the command "btrfs fi df". The previous attempt was well received, but there was no general consensus about the wording. Moreover, I still didn't understand how btrfs was using the disks. My first attempt was to develop a new command which shows how the disks

2012 Dec 17 (5 replies): Feedback on RAID1 feature of Btrfs

Hello, I'm testing the Btrfs RAID1 feature on 3 disks of ~10GB. The last one is not exactly 10GB (that would be too easy). The test machine is a kvm vm running an up-to-date archlinux with linux 3.7 and btrfs-progs 0.19.20121005.
# uname -a
Linux seblu-btrfs-1 3.7.0-1-ARCH #1 SMP PREEMPT Tue Dec 11 15:05:50 CET 2012 x86_64 GNU/Linux
The filesystem was created with:
# mkfs.btrfs -L
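
The mkfs line is cut off above; purely as an illustration, a plausible form of such an invocation (label and device names are guesses) would be:

# mkfs.btrfs -L seblu-test -d raid1 -m raid1 /dev/vda /dev/vdb /dev/vdc

Worth keeping in mind for this test: btrfs raid1 stores exactly two copies of each chunk regardless of how many devices are present, so with three unequal disks the usable space depends on how well chunks can be paired across devices.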

2013 Jan 12 (4 replies): obscure out of space, df and fi df are way off

Very low priority, no user data at risk. An 8GB virtual disk is being installed to, and the installer is puking; I'm trying to figure out why. I first get an rsync error 12, followed by the installer crashing. What's interesting is this (irrelevant source file systems deleted, just showing the mounts for the installed system):
[root@localhost tmp]# df
Filesystem
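
When plain df and "btrfs fi df" disagree like this, comparing three views usually narrows it down (mount point illustrative):

# df -h /mnt/sysimage
# btrfs fi df /mnt/sysimage
# btrfs fi show

Plain df reports raw device bytes, while fi df reports allocation per chunk type and profile; with DUP metadata each logical byte costs two raw bytes, so the two tools can legitimately diverge on a small 8GB disk.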

2012 Oct 04 (8 replies): [PATCH][BTRFS-PROGS][V3] btrfs filesystem df

Hi Chris, this series of patches updates the command "btrfs filesystem df". I updated this command because it is not easy to get disk usage information from the commands "fi df" and "fi show". This patch is the result of some discussions on the btrfs mailing list. Many thanks to all the contributors. From the man page (see 2nd patch): [...] The
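
Until a rework like this lands, the usual way to get the whole picture is to read the two existing commands side by side, since neither alone is sufficient:

# btrfs filesystem show
# btrfs filesystem df /mnt

fi show gives raw bytes allocated per device; fi df gives total/used per chunk type and profile. (Much later btrfs-progs eventually grew "btrfs filesystem usage", which merges the two views.)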

2012 May 29 (0 replies): [btrfs-progs] btrfs fi df output

Hello, I have a question regarding the "btrfs filesystem df" output.
# btrfs fi df /mnt/test
Data: total=3.01GB, used=512.19MB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00   <= What does this mean? What is it used for? I've never seen it incremented.
Metadata, DUP: total=2.50GB, used=676.00KB
Metadata: total=8.00MB, used=0.00
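
As far as I know, those small unqualified "System" and "Metadata" lines are leftover single-profile chunks that mkfs.btrfs creates while setting the filesystem up, before the DUP profile takes over; they stay empty forever. On kernels of that era a full balance would normally rewrite and drop them:

# btrfs balance start /mnt/test
# btrfs fi df /mnt/test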

2011 Jul 11 (4 replies): extremely slow syncing on btrfs with 2.6.39.1

I've been monitoring the lists for a while now but didn't see this problem mentioned in particular: I've got a fairly standard desktop system at home, a 700gb WD drive, nothing special, with 2 btrfs filesystems and some snapshots. The system runs for days, and I noticed unusual disk activity the other evening; it turns out that it's taking forever to

2012 Feb 26 (0 replies): "device delete" kills contents

Hello, linux-btrfs, I've (once again) tried "add" and "delete". First, with 3 devices (partitions):
mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1
Mounted (to /mnt/btr) and filled with about 100 GByte of data. Then
btrfs device add /dev/sdj1 /mnt/btr
results in
# show
Label: none  uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770
 Total devices 4 FS bytes
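
The excerpt cuts off before the delete, but the context suggests the usual grow-then-shrink sequence. A sketch of that sequence in current btrfs-progs syntax (device names as above):

btrfs device add /dev/sdj1 /mnt/btr
btrfs balance start /mnt/btr
btrfs device delete /dev/sdk1 /mnt/btr

With raid0 data, "device delete" has to relocate every chunk touching the outgoing device onto the remaining ones; the intermediate balance is optional but restripes raid0 evenly across the enlarged device set first.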

2012 May 05 (5 replies): Is it possible to reclaim block groups once they are allocated to data or metadata?

Hello list, I recently reformatted my home partition from XFS to RAID1 btrfs. I used the default options to mkfs.btrfs except for enabling raid1 for data as well as metadata. The filesystem is made up of two 1TB drives.
mike@mercury (0) pts/3 ~ $ sudo btrfs filesystem show
Label: none  uuid: f08a8896-e03e-4064-9b94-9342fb547e47
 Total devices 2 FS bytes used 888.06GB
 devid 1 size 931.51GB used
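
The usual answer is yes, on kernels with restriper support (3.3+): a balance rewrites block groups and hands the emptied ones back to the unallocated pool, and the usage filter limits the work to nearly-empty block groups. A minimal sketch (the 5% threshold is illustrative):

$ sudo btrfs balance start -dusage=5 /home
$ sudo btrfs balance start -musage=5 /home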

2012 Nov 22 (0 replies): raid10 data fs full after degraded mount

Hello, on a fs with 4 disks, raid10 for data, one drive was failing and has been removed. After a reboot and 'mount -o degraded ...', the fs looks full, even though before removal of the failed device it was almost 80% free.
root@fs0:~# df -h /mnt/b
Filesystem      Size  Used Avail Use% Mounted on
/dev/sde         11T  2.5T   41M 100% /mnt/b
root@fs0:~# btrfs fi df /mnt/b
Data,
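
A degraded raid10 cannot allocate any new chunks, since raid10 needs four writable devices, so "Avail" collapsing to 41M is the allocator advertising that refusal rather than real data growth. The usual recovery, assuming a spare disk /dev/sdf (name illustrative), is:

root@fs0:~# btrfs device add /dev/sdf /mnt/b
root@fs0:~# btrfs device delete missing /mnt/b

"delete missing" relocates the chunks that lost a copy onto the new device, after which the fs no longer needs the degraded mount option.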

2012 Apr 08 (4 replies): [PATCH] Revert "Btrfs: increase the global block reserve estimates"

This reverts commit 5500cdbe14d7435e04f66ff3cfb8ecd8b8e44ebf. We had numerous reports of premature ENOSPC that were bisected to this patch. Reverting will not break things, but a warning in 'use_block_rsv' may show up in the syslog. There's no alternative fix in sight and the ENOSPC problem affects all 3.3 btrfs users during normal filesystem use. CC:

2010 Dec 29 (1 reply): Reproducible kernel BUG while using VirtualBox

All, I believe that I can pretty reliably reproduce the BUG mentioned in the attached dmesg output. (This doesn't mean that you can, but I'll detail what I've done here.) [This BUG is the same one that I reported last night.]
1) Create a 2 GB dynamically expanding disk.
2) Attach it to a VirtualBox machine.
3) Start the

2012 May 06 (4 replies): btrfs-raid10 <-> btrfs-raid1 confusion

Greetings, until yesterday I was running a btrfs filesystem across two 2.0 TiB disks in RAID1 mode for both metadata and data, without any problems. As space was getting short, I wanted to extend the filesystem with two additional drives I had lying around, both 1.0 TiB in size. Knowing little about the btrfs RAID implementation, I thought I had to switch to RAID10 mode, which I was told is
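
For the record, switching profiles does not require recreating the filesystem: since kernel 3.3 the balance convert filters restripe in place. A sketch, assuming the two new disks show up as /dev/sdc and /dev/sdd:

# btrfs device add /dev/sdc /dev/sdd /mnt
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt

Staying on raid1 would also have kept working across all four devices, since btrfs raid1 places the two copies of each chunk on any pair of devices rather than requiring equal-sized mirror halves.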

2011 Apr 09 (16 replies): wrong values in "df" and "btrfs filesystem df"

Hello, linux-btrfs, First I create an array of 2 disks with
mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
and mount it at /srv/MM. Then I fill it with about 1.6 TByte. And then I add /dev/sde1 via
btrfs device add /dev/sde1 /srv/MM
btrfs filesystem balance /srv/MM
(it ran about 20 hours). Then I work on it, copy some new files, delete some old files; all works well. Only df
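
The add-then-balance sequence above is the right one; after adding a device, a full balance restripes the existing raid0 chunks across all three disks. The resulting per-device distribution can then be checked with:

btrfs filesystem show
btrfs fi df /srv/MM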

2007 Oct 29 (9 replies): zpool question

hello folks, I am running Solaris 10 U3 and I have a small problem that I don't know how to fix... I had a pool of two drives:
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:
        NAME          STATE   READ WRITE CKSUM
        mypool        ONLINE     0     0     0
          emcpower0a  ONLINE     0     0     0
          emcpower1a  ONLINE
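
The body of the question is truncated, but a classic trap with a striped pool like this one is "zpool add" versus "zpool attach": add creates a new top-level stripe that Solaris 10 offered no way to remove again, while attach turns an existing disk into a mirror and is reversible with detach. A sketch, with emcpower2a as a hypothetical spare:

bash-3.00# zpool attach mypool emcpower0a emcpower2a
bash-3.00# zpool status mypool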

2013 Jun 05 (8 replies): btrfs raid1 on 16TB goes read-only after "btrfs: block rsv returned -28"

Dear Devs, I have 4x 4TB HDDs formatted with:
mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef]
/etc/fstab mounts with the options: noatime,noauto,space_cache,inode_cache
All on kernel 3.8.13. Upon using rsync to copy some heavily hardlinked backups from ReiserFS, I've seen the following: "block rsv returned -28" is repeated 7 times until there is a call trace

2013 Oct 23 (0 replies): Soft lockup btrfs-transacti:680

When I try to umount a btrfs filesystem I always get this error with kernels 3.11.4 and 3.11.3, but I can mount and umount without error on kernel 3.11.2. The exact error messages are:
BUG: soft lockup - CPU#0 stuck for 23s! [btrfs-transacti:680]
BUG: soft lockup - CPU#1 stuck for 23s! [umount:1575]
I'm on Fedora 19. I have run scrub and there are no errors:
# btrfs scrub status /home
scrub
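
When reporting a soft lockup like this, the most useful extra data is usually the blocked-task stack dump, which can be captured right after the umount wedges:

# echo w > /proc/sysrq-trigger
# dmesg | tail -n 60

The sysrq 'w' trigger writes the stack of every uninterruptible task to the kernel log, which shows where the btrfs transaction thread and umount are stuck waiting on each other.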

2012 Dec 13 (22 replies): [PATCH] Btrfs: fix a deadlock on chunk mutex

A user reported hitting an annoying deadlock while running ceph on top of btrfs. Currently, updating the device tree requires space from a METADATA chunk, so we -may- need to do a recursive chunk allocation when adding/updating a dev extent, and that is where the deadlock comes from. If we use SYSTEM metadata to update the device tree, we can avoid the recursion. Reported-by: Jim Schutt

2013 Sep 23 (12 replies): balance induced csum errors

SAMSUNG SSD 830 Series
CPU0: Intel® Core(TM) i7-2820QM CPU @ 2.30GHz (fam: 06, model: 2a, stepping: 07)
8GB RAM (quite heavily tested, though not recently, with several days of memtest)
kernel 3.11.1-200.fc19.x86_64 running on baremetal
btrfs-progs-0.20.rc1.20130308git704a08c-1.fc19.x86_64
Today I did a scrub on a btrfs volume, with no messages or errors in the console, dmesg, or journal. Immediately after

2011 Sep 27 (2 replies): high CPU usage and low perf

Hiya, recently a btrfs file system of mine started to behave very poorly, with some btrfs kernel tasks taking 100% of CPU time.
# btrfs fi show /dev/sdb
Label: none  uuid: b3ce8b16-970e-4ba8-b9d2-4c7de270d0f1
 Total devices 3 FS bytes used 4.25TB
 devid 2 size 2.73TB used 1.52TB path /dev/sdc
 devid 1 size 2.70TB used 1.49TB path /dev/sda4
 devid 3 size