similar to: A couple of questions

Displaying 20 results from an estimated 20000 matches similar to: "A couple of questions"

2009 Apr 03
10
btrfs for enterprise raid arrays
Dear all, While going through the archived mailing list and crawling through the wiki I didn't find any clues as to whether there are any optimizations in Btrfs to make efficient use of the functions and features that exist today on enterprise-class storage arrays. One exception was the ssd option, which I think can improve read and write I/Os; however, when attached to
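For context, the ssd behavior the poster mentions is a mount option; a minimal sketch, assuming a hypothetical device /dev/sdX and mount point /mnt (btrfs also tries to auto-detect non-rotational devices on its own):

  # enable the SSD allocation heuristics explicitly
  mount -o ssd /dev/sdX /mnt
  # or force them off, e.g. when the "disk" is really an array LUN backed by spinning media
  mount -o nossd /dev/sdX /mnt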
2013 Oct 19
13
[PATCH] Btrfs: fix race condition between writting and scrubing supers
From: Wang Shilong <wangsl.fnst@cn.fujitsu.com> Scrubbing supers does not happen in a transaction context, so when trying to write supers to disk we should check whether we are currently scrubbing them. Fix it. Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com> --- fs/btrfs/disk-io.c | 2 ++ fs/btrfs/transaction.c | 2 ++ 2 files changed, 4 insertions(+) diff --git a/fs/btrfs/disk-io.c
2013 Sep 23
12
balance induced csum errors
SAMSUNG SSD 830 Series CPU0: Intel® Core(TM) i7-2820QM CPU @ 2.30GHz (fam: 06, model: 2a, stepping: 07) 8GB RAM (heavily tested with several days of memtest, though not recently) kernel 3.11.1-200.fc19.x86_64 running on bare metal btrfs-progs-0.20.rc1.20130308git704a08c-1.fc19.x86_64 Today I did a scrub on a btrfs volume, with no messages or errors in the console, dmesg, or journal. Immediately after
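For reference, a scrub of this kind is typically run and checked along these lines (mount point /mnt is hypothetical; a sketch, not the reporter's exact commands):

  # run a scrub in the foreground, then show the result summary
  btrfs scrub start -B /mnt
  btrfs scrub status /mnt
  # any checksum errors found during the scrub also show up in the kernel log
  dmesg | grep -i btrfs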
2012 Oct 27
7
How does btrfs behave on checksum mismatch?
I came across the tidbit that ZFS has a contract guarantee that the data read back will either be correct (the checksum computed over the data read from the disk matches the checksum stored on disk), or you get an I/O error. Obviously, this greatly reduces the probability that the data is invalid. (Particularly when taken in combination with the disk firmware's own ECC and checksumming.)
2011 Nov 23
2
stripe alignment consideration for btrfs on RAID5
Hiya, is there any recommendation out there for setting up a btrfs FS on top of hardware or software raid5 or raid6 wrt stripe/stride alignment? From mkfs.btrfs, it doesn't look like there's much that can be adjusted that would help, and what I'm asking might not even make sense for btrfs, but I thought I'd just ask. Thanks, Stephane
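As the poster notes, mkfs.btrfs exposes no stride/stripe-width knobs comparable to ext4's -E stride/stripe-width or XFS's su/sw; about the only thing to check is the geometry of the underlying array. A sketch, assuming the md device is /dev/md0:

  # report the RAID level and chunk size of the underlying array
  mdadm --detail /dev/md0 | grep -E 'Level|Chunk Size'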
2013 Jun 03
3
csum failed during rebalance
Hi, I added a new drive to an existing RAID 0 array. Every attempt to rebalance the array fails: # btrfs filesystem balance /share/bd8 ERROR: error during balancing '/share/bd8' - Input/output error # dmesg | tail btrfs: found 1 extents btrfs: relocating block group 10752513540096 flags 1 btrfs: found 5 extents btrfs: found 5 extents btrfs: relocating block group 10751439798272
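For readers unfamiliar with the workflow, adding a device and rebalancing usually looks like this (the new device name is hypothetical; the error above occurs during the second step):

  # add the new disk to the mounted filesystem, then spread existing data across all members
  btrfs device add /dev/sdX /share/bd8
  btrfs filesystem balance /share/bd8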
2013 Sep 22
10
[PATCH] Btrfs: fix sync fs to actually wait for all data to be persisted
Currently the fs sync function (super.c:btrfs_sync_fs()) doesn't wait for delayed work to finish before returning success to the caller. This change fixes this, ensuring that there's no data loss if a power failure happens right after fs sync returns success to the caller and before the next commit happens. Steps to reproduce the data loss issue: $ mkfs.btrfs -f /dev/sdb3 $
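The full reproduction recipe is truncated above; purely as a hypothetical illustration of the code path being discussed (paths made up, not the author's steps), the problem is about data written and then synced filesystem-wide:

  # write some data, then request a filesystem-wide sync; the bug described above means
  # that, before the fix, a power failure right after sync returned could still lose the data
  dd if=/dev/zero of=/mnt/testfile bs=1M count=16
  sync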
2012 Aug 22
1
interaction with hardware RAID?
It is well documented that btrfs data recovery (after silent corruption) is dependent on the use of btrfs's own RAID1. However, I'm curious about whether any hardware RAID vendors are contemplating ways to integrate more closely with btrfs, for example, such that when btrfs detects a bad checksum, it would be able to ask the hardware RAID controller to return all alternate
2012 Apr 01
19
cross-subvolume cp --reflink
Good luck! I know it's been discussed more than once, but as a user I really would like to see the patch that allows this in the kernel. Some users have tested this patch successfully for weeks or months across 2 or 3 kernel versions since then, true? I'd say a snapshot is nothing else in the end: more than one file or tree sharing the same data on disk, or am I wrong?
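For concreteness, this is the operation the patch would allow across subvolume boundaries (paths hypothetical); within a single subvolume it already works, and a snapshot achieves the same data sharing at whole-subvolume granularity:

  # clone a file's extents instead of copying the data (copy-on-write copy)
  cp --reflink=always /mnt/subvol_a/big.iso /mnt/subvol_b/big.iso
  # share the data of an entire tree via a snapshot
  btrfs subvolume snapshot /mnt/subvol_a /mnt/subvol_a_snap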
2012 Mar 06
4
Understanding metadata efficiency of btrfs
I've run a slightly odd benchmark comparing Btrfs v0.19 and XFS: there are 2000 directories and each directory contains 1000 files. The workload randomly stats or chmods a file, 2,000,000 times in total, with stat and chmod each making up 50% of the operations. I monitor the number of disk requests: #Disk Write Requests, #Disk Read Requests, #Disk Write Sectors, #Disk Read
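A scaled-down sketch of the described workload (path and sizes are made up; the original used 2000 directories x 1000 files and 2,000,000 operations):

  #!/bin/bash
  # create a small directory tree, then issue random stat/chmod operations split 50/50
  root=/mnt/bench
  for d in $(seq 1 20); do
      mkdir -p "$root/dir$d"
      for f in $(seq 1 100); do : > "$root/dir$d/f$f"; done
  done
  for i in $(seq 1 10000); do
      d=$((RANDOM % 20 + 1)); f=$((RANDOM % 100 + 1))
      if (( RANDOM % 2 )); then
          stat "$root/dir$d/f$f" > /dev/null
      else
          chmod 644 "$root/dir$d/f$f"
      fi
  done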
2012 May 25
6
[PATCH v5 0/3] Btrfs: add IO error device stats
Changes v1-v2: - Remove restriction that BTRFS_IOC_GET_DEVICE_STATS is a privileged operation - Cast u64 to unsigned long long for printf() Changes v2-v3: - Rebased on Chris' current master Changes v3-v4: - Add padding at end of ioctl structure Changes v4-v5: - The statistic members in the ioctl are now organized as an array of 64 bit values. Symbolic names for the array indexes
2012 May 03
1
[PATCH] Btrfs: fix crash in scrub repair code when device is missing
Fix a crash that occurs when scrub tries to repair an I/O or checksum error while one of the devices containing the mirror is missing: it crashes in bio_add_page because the bdev is a NULL pointer for missing devices. Reported-by: Marco L. Crociani <marco.crociani@gmail.com> Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de> --- fs/btrfs/scrub.c | 7 +++++++ 1 file changed, 7 insertions(+)
2008 Aug 05
31
Btrfs v0.16 released
Hello everyone, Btrfs v0.16 is available for download, please see http://btrfs.wiki.kernel.org/ for download links and project information. v0.16 has a shiny new disk format, and is not compatible with filesystems created by older Btrfs releases. But, it should be the fastest Btrfs yet, with a wide variety of scalability fixes and new features. There were quite a few contributors this time
2008 Dec 09
17
Data De-duplication
Hi, Say I download a large file from the net to /mnt/a.iso. I then download the same file again to /mnt/b.iso. These files now have the same content, but are stored twice since the copies weren't made with the bcp utility. The same occurs if a directory tree with duplicate files (created with bcp) is put through a non-aware program - for example tarred and then untarred again. This
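Nothing deduplicates such copies automatically; as a hedged aside, later out-of-band tools can reclaim the space after the fact, e.g. (mount point hypothetical):

  # scan a tree for duplicate extents and submit them to the kernel's dedupe ioctl
  duperemove -dr /mnt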
2012 Dec 17
5
Feeback on RAID1 feature of Btrfs
Hello, I'm testing the Btrfs RAID1 feature on 3 disks of ~10GB. The last one is not exactly 10GB (that would be too easy). About the test machine: it's a kvm vm running an up-to-date archlinux with linux 3.7 and btrfs-progs 0.19.20121005. #uname -a Linux seblu-btrfs-1 3.7.0-1-ARCH #1 SMP PREEMPT Tue Dec 11 15:05:50 CET 2012 x86_64 GNU/Linux The filesystem was created with: # mkfs.btrfs -L
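The mkfs command is cut off above; a generic three-disk RAID1 creation looks roughly like this (label and device names hypothetical, not the poster's exact invocation):

  # keep two copies of both data and metadata, spread across the three devices
  mkfs.btrfs -L test -d raid1 -m raid1 /dev/vdb /dev/vdc /dev/vdd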
2011 Dec 09
10
[PATCH 0/3] Btrfs: add IO error device stats
The goal is to detect when drives start to show an increased error rate and should be replaced soon. Therefore, statistic counters are added that count IO errors (read, write and flush). Additionally, software-detected errors such as checksum errors and corrupted blocks are counted. An ioctl interface is added to get the device statistic counters. A second ioctl is added to atomically get
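For reference, per-device counters of this kind are what the btrfs tool exposes as device statistics; a sketch with a hypothetical mount point:

  # print read/write/flush I/O error counts plus corruption and generation errors per device
  btrfs device stats /mnt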
2013 Oct 27
2
Error from Trying to Mount Btrfs
I have the attached error from trying to mount btrfs on an external hard drive. The filesystem was my primary system; I then dd'd it to an external drive and reinstalled Fedora. I tried to follow https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#Filesystem_can.27t_be_mounted_by_label. I used "# btrfs device scan --all-devices" before attempting to mount. What should I do?
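A common pitfall after dd'ing a btrfs filesystem is that two visible devices now carry the same filesystem UUID, which confuses mount-by-label; a sketch of how to check and to mount the copy explicitly (device name hypothetical):

  # list filesystem UUIDs; two entries with the same btrfs UUID mean the dd'd copy
  # and the original are both visible to the kernel
  blkid
  # mount the intended copy explicitly by device node rather than by label
  mount /dev/sdb1 /mnt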
2015 Feb 11
1
CentOS 7: create RAID arrays manually using mdadm --create?
On 2/10/2015 6:54 PM, Chris Murphy wrote: > Why I avoid swap on md raid 1/10 is because of the swap caveats listed > under man 4 md. It is possible for a page in memory to change between the > writes to the two md devices such that the mirrors are in fact > different. The man page only suggests this makes scrub check results > unreliable, and that such a difference wouldn't be read
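For completeness, the manual creation being discussed looks roughly like this (device names hypothetical; here used for a mirrored swap device):

  # build a two-disk RAID1 array and use it as swap
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkswap /dev/md0
  swapon /dev/md0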
2009 Feb 19
8
RFE for two-level ZFS
Should I file an RFE for this addition to ZFS? The concept would be to run ZFS on a file server, exporting storage to an application server where ZFS also runs on top of that storage. All storage management would take place on the file server, where the physical disks reside. The application server would still perform end-to-end error checking but would notify the file server when it detected
2012 Aug 20
6
btrfs and mdadm raid 6
Hi. I'm considering an imminent switch from ext4 to btrfs and I'm hoping that someone can lend me advice before I do something unsupported. I have a software raid 6 array configured via mdadm. It was sitting at 8 x 3TB until I recently doubled that, grew the array, and found that ext4 doesn't want to resize. So, I'm looking to: 1. convert from ext4 to btrfs
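The in-place conversion mentioned in step 1 is normally done with btrfs-convert; a sketch, assuming the md array is /dev/md0 and the filesystem is unmounted first:

  # convert the existing ext4 filesystem in place; a rollback image of the original
  # filesystem is kept so the conversion can be undone
  umount /dev/md0
  btrfs-convert /dev/md0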