Displaying 20 results from an estimated 3000 matches similar to: "wrong values in "df" and "btrfs filesystem df""
2011 Nov 02
2
what does "scrub" mean?
Hello,
I'd like to get some explanations ...
# btrfs filesystem show
Label: 'MMedia' uuid: 120b036a-883f-46aa-bd9a-cb6a1897c8d2
Total devices 3 FS bytes used 3.80TB
devid 1 size 1.82TB used 1.29TB path /dev/sdg1
devid 3 size 1.81TB used 1.29TB path /dev/sdc1
devid 2 size 1.81TB used 1.28TB path /dev/sdb1
Btrfs v0.19
# btrfs filesystem df /srv/MM
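For context, a scrub reads every copy of data and metadata on the filesystem and verifies it against the stored checksums, repairing bad copies from a good mirror where one exists. A minimal sketch, reusing the mount point from the post above:
# btrfs scrub start /srv/MM
# btrfs scrub status /srv/MM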
2010 Dec 01
12
Fsck, parent transid verify failed
Hi folks!
Been using btrfs for quite a while now, worked great until now.
Got power-loss on my machine and now I have the "parent transid verify
failed on X wanted X found X" problem.
So I can't get it to mount.
My btrfs is spread over sda (2TB), sdc (2TB), sdd (1TB).
Is this something that an offline fsck could fix ?
If so is the fsck-util being developed ?
Is there a way to
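For context, kernels from the 3.2 era onward added a mount-time fallback to an older tree root, which became the usual first aid for transid failures; a hedged sketch (the option was later renamed usebackuproot):
# mount -o ro,recovery /dev/sda /mnt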
2009 Nov 19
10
Unable to mount loopback devices in RAID mode
Hi!
I recently tried to mount a filesystem in RAID1 mode using loopback devices. I followed the instructions at [1]. Here's exactly what I've done:
$ dd if=/dev/zero of=raid1_0.img bs=1M count=500
$ dd if=/dev/zero of=raid1_1.img bs=1M count=500
$ mkfs.btrfs -m raid1 -d raid1 raid1_0.img raid1_1.img
$ losetup /dev/loop0 raid1_0.img
$ losetup /dev/loop1 raid1_1.img
$ mount -t
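The step that commonly gets missed with multi-device filesystems: the kernel must learn about all members before the mount. A minimal sketch, assuming the loop devices above:
$ btrfs device scan
$ mount -t btrfs /dev/loop0 /mnt
or, naming the members explicitly instead of scanning:
$ mount -t btrfs -o device=/dev/loop0,device=/dev/loop1 /dev/loop0 /mnt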
2013 May 13
7
Remove a materially failed device from a Btrfs "single-raid" using partitions
Hello,
I am on Ubuntu Server 13.04 with Linux 3.8.
I've created a "single-raid" using /dev/sd{a,b,c,d}{1,3}. One of my hard
drives has failed; I mean it's physically dead.
:~$ sudo btrfs filesystem show
Label: none uuid: 40886f51-8c9b-4be1-8721-83bf5653d2a0
Total devices 5 FS bytes used 226.90GB
devid 4 size 37.27GB used 31.01GB path /dev/sdd1
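With a physically dead member, the usual recovery path is a degraded mount followed by removal of the missing device; a minimal sketch (mount point hypothetical):
:~$ sudo mount -o degraded /dev/sdd1 /mnt
:~$ sudo btrfs device delete missing /mnt
Note that data stored in the 'single' profile has no second copy, so extents that lived only on the dead disk are not recoverable this way.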
2009 Nov 27
5
unexpected raid1 behavior?
Hi, I'm starting to play with btrfs on my new computer. I'm running Gentoo and
have compiled the 2.6.31 kernel, enabling btrfs.
Now I have 2 partitions (on 2 different sata disks) that are free for me to
play with, each about 375 GB in size. I wanted to create a "raid1" volume
using these two partitions, so I did:
# mkfs.btrfs -d raid1 /dev/sda5 /dev/sdb5
# mount
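Once such a filesystem is mounted, the profile actually in use per chunk type can be verified; a minimal sketch (mount point hypothetical):
# btrfs filesystem df /mnt
A 'Data, RAID1' line in the output confirms that data is being mirrored.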
2011 Apr 01
15
btrfs balancing start - and stop?
Hi,
My company is testing btrfs (kernel 2.6.38) on a slave MySQL database
server with a 195GB filesystem (of which about 123GB is used). So far,
we're quite impressed with the performance. Our database loads are high,
and if filesystem performance wasn't good, MySQL replication wouldn't
be able to keep up and the slave latency would begin to climb. This
though, is
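For context on the subject line: balance later grew explicit control subcommands (kernel 3.3 and matching btrfs-progs, which postdate this thread); a minimal sketch:
# btrfs balance start /path
# btrfs balance status /path
# btrfs balance pause /path
# btrfs balance resume /path
# btrfs balance cancel /path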
2012 May 07
53
kernel 3.3.4 damages filesystem (?)
Hello,
"never change a running system" ...
For some months I ran btrfs under kernels 3.2.5 and 3.2.9, without
problems.
Yesterday I compiled kernel 3.3.4, and this morning I started the
machine with this kernel. There may be some ugly problems.
Copying something into the btrfs "directory" worked well for some files,
and then I got error messages (I've not
2011 Jan 12
1
Filesystem creation in "degraded mode"
I've had a go at determining exactly what happens when you create a
filesystem without enough devices to meet the requested replication
strategy:
# mkfs.btrfs -m raid1 -d raid1 /dev/vdb
# mount /dev/vdb /mnt
# btrfs fi df /mnt
Data: total=8.00MB, used=0.00
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=153.56MB, used=24.00KB
Metadata:
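The DUP lines above are the single-device stand-in for the requested raid1 metadata. After a second device is attached, convert filters (a balance feature added in kernel 3.3, after this post) can restore the intended profile; a hedged sketch with a hypothetical second device:
# btrfs device add /dev/vdc /mnt
# btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt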
2011 Feb 05
2
Strangeness on btrfs balance..
Hi there...
I have kernel version 2.6.36.3, compiled with gcc 4.4.5, btrfstools
version 0.19+20101101
I have a btrfs filesystem (/data) consisting of two 1TB hard disks, raid0.
I added in another 1TB hard drive.
root@X86-64:~# btrfs filesystem show
failed to read /dev/sdh
failed to read /dev/sdg
failed to read /dev/sdf
failed to read /dev/sde
failed to read /dev/sr0
failed to read /dev/fd0u800
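The 'failed to read' lines are ordinarily harmless: the device scan probes every block device in the system, including the optical drive and floppy. Growing a filesystem onto a new disk is a two-step job; a minimal sketch with a hypothetical device name:
root@X86-64:~# btrfs device add /dev/sdX /data
root@X86-64:~# btrfs filesystem balance /data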
2013 Jun 03
3
csum failed during rebalance
Hi,
I added a new drive to an existing RAID 0 array. Every
attempt to rebalance the array fails:
# btrfs filesystem balance /share/bd8
ERROR: error during balancing '/share/bd8' - Input/output error
# dmesg | tail
btrfs: found 1 extents
btrfs: relocating block group 10752513540096 flags 1
btrfs: found 5 extents
btrfs: found 5 extents
btrfs: relocating block group 10751439798272
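A csum failure during relocation usually means the corruption predates the balance and is merely being tripped over as chunks are read back. Scrubbing first can locate the affected files via the kernel log; a minimal sketch using the mount point above:
# btrfs scrub start -B /share/bd8
# dmesg | grep -i csum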
2011 Sep 27
2
high CPU usage and low perf
Hiya,
Recently,
a btrfs file system of mine started to behave very poorly with
some btrfs kernel tasks taking 100% of CPU time.
# btrfs fi show /dev/sdb
Label: none uuid: b3ce8b16-970e-4ba8-b9d2-4c7de270d0f1
Total devices 3 FS bytes used 4.25TB
devid 2 size 2.73TB used 1.52TB path /dev/sdc
devid 1 size 2.70TB used 1.49TB path /dev/sda4
devid 3 size
2020 Sep 09
4
Btrfs RAID-10 performance
Hi, thank you for your reply. I'll continue inline...
On 09.09.2020 at 3:15, John Stoffel wrote:
> Miloslav> Hello,
> Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got reply:
> Miloslav> "RAID-1 would be preferable"
> Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112@lechevalier.se/T/).
>
2012 Oct 25
46
[RFC] New attempt to a better "btrfs fi df"
Hi all,
this is a new attempt to improve the output of the command "btrfs fi df".
The previous attempt received a good reception; however, there was no
general consensus about the wording.
Moreover, I still didn't understand how btrfs was using the disks.
My first attempt was to develop a new command which shows how the
disks
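For context, work in this direction later landed in btrfs-progs as a dedicated subcommand that reports per-device allocation alongside the per-profile totals that "btrfs fi df" shows; a minimal usage sketch:
# btrfs filesystem usage /mnt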
2013 Apr 03
2
[bug] btrfs fi df doesn't show raid type after balance
Did something break? We are not reporting the raid type after balance.
-----------
# btrfs fi df /btrfs
Data, RAID0: total=2.00GB, used=2.03MB
Data: total=8.00MB, used=0.00
System, RAID0: total=16.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID0: total=2.00GB, used=216.00KB
Metadata: total=8.00MB, used=4.00KB
# btrfs bal /btrfs
Done, had to relocate 5 out of 5 chunks
# btrfs fi
2020 Sep 07
4
Btrfs RAID-10 performance
Hello,
I sent this into the Linux Kernel Btrfs mailing list and I got reply:
"RAID-1 would be preferable"
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112@lechevalier.se/T/).
May I ask for comments from the people around Dovecot?
We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
server with Intel(R) Xeon(R) CPU E5-2620 v4 @
2011 Jan 18
6
BUG while writing to USB btrfs filesystem
While untar'ing an image to an SD card via a reader, I got the
following bug. The system also has a btrfs root, and a whole swath of
processes went into uninterruptable sleep. I was able to poke around
via ssh and sysrq, and already had netconsole set up to capture the
bug.
Root fs is on /dev/sdi1, and /dev/sdj2 is the card reader which was
the target of the untar.
[29571.448889] sd
2013 Oct 06
5
btrfs device delete problem
Hi,
I'm getting an error when trying to delete a device from a raid1 (data
and metadata mirrored).
> btrfs filesystem show
failed to read /dev/sr0
Label: none uuid: 78b5162b-489e-4de1-a989-a47b91adef50
Total devices 2 FS bytes used 107.64GB
devid 2 size 149.05GB used 109.01GB path /dev/sdh1
devid 1 size 156.81GB used 109.03GB path /dev/sdb6
Btrfs v0.20-rc1
>
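A two-device raid1 cannot drop a member, since raid1 needs two devices to hold both copies; a replacement has to be attached first. A hedged sketch (device names illustrative):
> btrfs device add /dev/sdX1 /mnt
> btrfs device delete /dev/sdh1 /mnt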
2013 Aug 11
2
(un)mounting takes a long time
Hello!
I'm using ArchLinux with kernel Linux horus 3.10.5-1-ARCH #1 SMP PREEMPT.
Mounting and unmounting take a long time:
# time mount -v /mnt/Archiv
mount: /dev/sde1 mounted on /mnt/Archiv.
mount -v /mnt/Archiv 0,00s user 0,16s system 1% cpu 9,493 total
# sync && time umount -v /mnt/Archiv
umount: /mnt/Archiv (/dev/sdd1) unmounted
umount -v /mnt/Archiv 0,00s user
2012 Mar 23
2
btrfs crash after disk reconnect
Observed on Linux 3.2.9 after the controller/disk flaked in-out.
(The world still needs a SCSI error decoding tool to tell normal people
what cmd and res are about.)
[ 157.732885] device label srv devid 4 transid 11292 /dev/sdf
[ 157.733201] btrfs: disk space caching is enabled
[ 172.936515] device label srv devid 4 transid 11292 /dev/sdf
[44106.091461] ata4.01: exception Emask 0x0 SAct 0x0
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings,
until yesterday I was running a btrfs filesystem across two 2.0 TiB
disks in RAID1 mode for both metadata and data without any problems.
As space was getting short I wanted to extend the filesystem by two
additional drives lying around, which both are 1.0 TiB in size.
Knowing little about the btrfs RAID implementation I thought I had to
switch to RAID10 mode, which I was told is
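For context, btrfs raid1 is not limited to two devices: it keeps exactly two copies of each chunk, placed on whichever two devices have the most free space, so mixed-size members work without switching profiles. If a conversion is wanted anyway, it can be done online; a hedged sketch with hypothetical device names:
# btrfs device add /dev/sdX /mnt
# btrfs device add /dev/sdY /mnt
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt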