similar to: extremely slow syncing on btrfs with 2.6.39.1

Displaying 20 results from an estimated 90 matches similar to: "extremely slow syncing on btrfs with 2.6.39.1"

2012 Feb 26
0
"device delete" kills contents
Hello, linux-btrfs, I've (once again) tried "add" and "delete". First, with 3 devices (partitions): mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1 Mounted (to /mnt/btr), filled with about 100 GByte data. Then btrfs device add /dev/sdj1 /mnt/btr results in # show Label: none uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770 Total devices 4 FS bytes
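For reference, the sequence described in the post boils down to the following sketch (device names and sizes are the poster's):

  mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1
  mount /dev/sdk1 /mnt/btr
  # ... fill with about 100 GByte of data ...
  btrfs device add /dev/sdj1 /mnt/btr
  btrfs filesystem show                     # the new device shows up empty
  btrfs device delete /dev/sdm1 /mnt/btr    # relocates sdm1's chunks before removal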
2011 Aug 09
17
Re: Applications using fsync cause hangs for several seconds every few minutes
On 06/21/2011 01:15 PM, Jan Stilow wrote: > Hello, > > Nirbheek Chauhan <nirbheek@gentoo.org> writes: >> [...] >> >> Every few minutes, (I guess) when applications do fsync (firefox, >> xchat, vim, etc), all applications that use fsync() hang for several >> seconds, and applications that use general IO suffer extreme >> slowdowns.
1998 Jul 17
3
9GB Drives Show Up as 4GB
In NT Explorer, my 9GB usr shares show up as 4GB. Any suggestions? Doug Smith
2010 Oct 28
0
RAID0 limiting disk utilization
I noticed that if I have single-device allocation for data in a multi-device btrfs filesystem, a balance operation will convert the data to RAID0. This is true even if '-d single' is specified explicitly when creating the filesystem. Then it wants to continue using RAID0 for future data allocations, and I run out of space once there are no longer two drives with space
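One later remedy for this is a converting balance; balance filters postdate this thread (kernel 3.3 and matching btrfs-progs), so this is a sketch with hypothetical device names:

  mkfs.btrfs -d single -m raid1 /dev/sdb1 /dev/sdc1
  mount /dev/sdb1 /mnt
  btrfs balance start -dconvert=single /mnt   # rewrite data chunks back to the single profile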
2011 Apr 09
16
wrong values in "df" and "btrfs filesystem df"
Hello, linux-btrfs, First I create an array of 2 disks with mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1 and mount it at /srv/MM. Then I fill it with about 1.6 TByte. And then I add /dev/sde1 via btrfs device add /dev/sde1 /srv/MM btrfs filesystem balance /srv/MM (it ran about 20 hours) Then I work on it, copy some new files, delete some old files - all works well. Only df
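To compare the views this thread is about, a minimal sketch (paths as in the post):

  df -h /srv/MM                  # the kernel's statfs numbers
  btrfs filesystem df /srv/MM    # btrfs' per-profile allocation
  btrfs filesystem show          # raw bytes allocated on each device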
2011 Dec 28
13
fstrim on BTRFS
Hi! With 3.2-rc4 (probably earlier), Ext4 seems to remember what areas it trimmed: merkaba:~> fstrim -v /boot /boot: 224657408 bytes were trimmed merkaba:~> fstrim -v /boot /boot: 0 bytes were trimmed But BTRFS does not: merkaba:~> fstrim -v / /: 4431613952 bytes were trimmed merkaba:~> fstrim -v / /: 4341846016 bytes were trimmed Is it planned to add this feature to BTRFS
2011 Jun 02
5
Screen corruption and crash at boot with Xen 4.1.0 & linux 2.6.39 on some systems
I have a custom built system based on LFS 6.6 with xen 4.1.0 & linux 2.6.39 built from source. The system boots correctly on one machine (after a problem with the USB disk has been worked around); however, when I try to boot the same system on another machine, the screen corrupts shortly after handover from the bootloader. This happens on 2 out of the 3 machines I have tried it on. I was
2011 Jan 12
1
Filesystem creation in "degraded mode"
I've had a go at determining exactly what happens when you create a filesystem without enough devices to meet the requested replication strategy: # mkfs.btrfs -m raid1 -d raid1 /dev/vdb # mount /dev/vdb /mnt # btrfs fi df /mnt Data: total=8.00MB, used=0.00 System, DUP: total=8.00MB, used=4.00KB System: total=4.00MB, used=0.00 Metadata, DUP: total=153.56MB, used=24.00KB Metadata:
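As the excerpt shows, a single-device mkfs with -m raid1 falls back to DUP chunks. A sketch of converting those once a second device is available (convert filters assumed; they arrived in kernel 3.3, after this post):

  btrfs device add /dev/vdc /mnt
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt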
2011 Jul 22
2
extlinux doesn't boot 3.0 kernel on a brand new HP 8200sff workstation
Hi, I compiled a 3.0 kernel for my system, using the same config as my already working 2.6.39.1 kernel on the same system. But whenever extlinux tries to load my 3.0 kernel it crashes instantly and reboots; not even one kernel message is displayed. So it seems the kernel isn't loaded at all and crashes. Rgds, /reni
2012 Oct 25
46
[RFC] New attempt to a better "btrfs fi df"
Hi all, this is a new attempt to improve the output of the command "btrfs fi df". The previous attempt received a good reception; however, there was no general consensus about the wording. Moreover, I still didn't understand how btrfs was using the disks. My first attempt was to develop a new command which shows how the disks
2012 May 05
5
Is it possible to reclaim block groups once they are allocated to data or metadata?
Hello list, I recently reformatted my home partition from XFS to RAID1 btrfs. I used the default options to mkfs.btrfs except for enabling raid1 for data as well as metadata. The filesystem is made up of two 1TB drives. mike@mercury (0) pts/3 ~ $ sudo btrfs filesystem show Label: none uuid: f08a8896-e03e-4064-9b94-9342fb547e47 Total devices 2 FS bytes used 888.06GB devid 1 size 931.51GB used
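The usual answer to the subject line is a filtered balance, which rewrites nearly-empty block groups and returns them to the unallocated pool. A sketch, assuming the usage filter is available (kernel 3.3+) and a hypothetical mount point:

  btrfs balance start -dusage=5 /home   # rewrite data block groups that are <5% used
  btrfs balance start -musage=5 /home   # the same for metadata block groups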
2013 Apr 03
2
[bug] btrfs fi df doesn't show raid type after balance
Did something break? We are not reporting the raid type after balance. ----------- # btrfs fi df /btrfs Data, RAID0: total=2.00GB, used=2.03MB Data: total=8.00MB, used=0.00 System, RAID0: total=16.00MB, used=4.00KB System: total=4.00MB, used=0.00 Metadata, RAID0: total=2.00GB, used=216.00KB Metadata: total=8.00MB, used=4.00KB # btrfs bal /btrfs Done, had to relocate 5 out of 5 chunks # btrfs fi
2011 Mar 04
13
cannot replace c10t0d0 with c10t0d0: device is too small
In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz storage pool and then shelved the other two for spares. One of the disks failed last night so I shut down the server and replaced it with a spare. When I tried to zpool replace the disk I get: zpool replace tank c10t0d0 cannot replace c10t0d0 with c10t0d0: device is too small The 4 original disk partition tables look like
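One way to compare usable sizes in this situation is via the partition tables (Solaris device paths; slice 2 conventionally covers the whole disk, and c10t1d0 here is a hypothetical surviving pool member):

  prtvtoc /dev/rdsk/c10t0d0s2    # sector count of the replacement disk
  prtvtoc /dev/rdsk/c10t1d0s2    # compare against an original disk
  zpool replace tank c10t0d0     # fails if the new disk is even slightly smaller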
2013 Jan 12
4
obscure out of space, df and fi df are way off
Very low priority. No user data at risk. 8GB virtual disk being installed to, and the installer is puking. I'm trying to figure out why. I first get an rsync error 12, followed by the installer crashing. What's interesting is this, deleting irrelevant source file systems, just showing the mounts for the installed system: [root@localhost tmp]# df Filesystem
2012 May 29
0
[btrfs-progs] btrfs fi df output
Hello, I have a question regarding "btrfs filesystem df" output. # btrfs fi df /mnt/test Data: total=3.01GB, used=512.19MB System, DUP: total=8.00MB, used=4.00KB System: total=4.00MB, used=0.00 <= What does this mean? What is it used for? I've never seen this incremented Metadata, DUP: total=2.50GB, used=676.00KB Metadata: total=8.00MB, used=0.00
2012 Oct 04
8
[PATCH][BTRFS-PROGS][V3] btrfs filesystem df
Hi Chris, this series of patches updates the command "btrfs filesystem df". I updated this command because it is not easy to get the information about disk usage from the commands "fi df" and "fi show". This patch is the result of some discussions on the btrfs mailing list. Many thanks to all the contributors. From the man page (see 2nd patch): [...] The
2012 Nov 22
0
raid10 data fs full after degraded mount
Hello, on a fs with 4 disks, raid 10 for data, one drive was failing and has been removed. After reboot and 'mount -o degraded...', the fs looks full, even though before removal of the failed device it was almost 80% free. root@fs0:~# df -h /mnt/b Filesystem Size Used Avail Use% Mounted on /dev/sde 11T 2.5T 41M 100% /mnt/b root@fs0:~# btrfs fi df /mnt/b Data,
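For context, the recovery path usually suggested in this situation is to drop the missing device so the raid10 chunks can be rebuilt (a sketch; it needs enough free space on the remaining drives):

  mount -o degraded /dev/sde /mnt/b
  btrfs device delete missing /mnt/b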
2012 Dec 17
5
Feeback on RAID1 feature of Btrfs
Hello, I'm testing the Btrfs RAID1 feature on 3 disks of ~10GB. The last one is not exactly 10GB (that would be too easy). About the test machine: it's a kvm vm running an up-to-date archlinux with linux 3.7 and btrfs-progs 0.19.20121005. #uname -a Linux seblu-btrfs-1 3.7.0-1-ARCH #1 SMP PREEMPT Tue Dec 11 15:05:50 CET 2012 x86_64 GNU/Linux Filesystem was created with : # mkfs.btrfs -L
2013 Jun 05
8
btrfs raid1 on 16TB goes read-only after "btrfs: block rsv returned -28"
Dear Devs, I have 4x 4TB HDDs formatted with: mkfs.btrfs -L bu-16TB_0 -d raid1 -m raid1 /dev/sd[cdef] /etc/fstab mounts with the options: noatime,noauto,space_cache,inode_cache All on kernel 3.8.13. Upon using rsync to copy some heavily hardlinked backups from ReiserFS, I've seen: The following "block rsv returned -28" is repeated 7 times until there is a call trace
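The quoted options correspond to an fstab entry roughly like this (the UUID is a placeholder and the mount point is hypothetical):

  UUID=<fs-uuid>  /mnt/bu  btrfs  noatime,noauto,space_cache,inode_cache  0  0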
2012 Jul 25
8
online increase of zfs after LUN increase ?
Hello, There is a feature of zfs (autoexpand, or zpool online -e) by which it can consume an increased LUN immediately and grow the zpool. That would be a very useful (vital) feature in an enterprise environment. But when I tried to use it, it did not work: the LUN expanded and is visible in format, but the zpool did not increase. I found a bug SUNBUG:6430818 (Solaris Does Not Automatically
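A sketch of the intended workflow (pool and device names hypothetical):

  zpool set autoexpand=on tank
  # grow the LUN on the storage array, then ask ZFS to pick up the new space:
  zpool online -e tank c10t0d0
  zpool list tank                # SIZE should now reflect the larger LUN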