similar to: PROBLEM: I/O stalls when running fstress against multi-device fs

Displaying 12 results from an estimated 12 matches similar to: "PROBLEM: I/O stalls when running fstress against multi-device fs"

2013 Aug 16
4
How btrfs resize should work ?
Hi, I am working on system storage manager (ssm), trying to implement btrfs resize correctly, but I am having some trouble with it. # mkfs.btrfs /dev/sda /dev/sdb # mount /dev/sda /mnt/test # btrfs filesystem show failed to open /dev/sr0: No medium found Label: none uuid: 8dce5578-a2bc-416e-96fd-16a2f4f770b7 Total devices 2 FS bytes used 28.00KB devid 2 size 50.00GB used 2.01GB path
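For context, resizing a multi-device btrfs filesystem is done per device by prefixing the size argument with the devid; a minimal sketch, assuming the two-device filesystem from the excerpt above is still mounted at /mnt/test and that devid 2 is the device being shrunk (the sizes are illustrative):
# btrfs filesystem show                        # list devids and per-device sizes
# btrfs filesystem resize 2:-10G /mnt/test     # shrink only devid 2 by 10 GiB
# btrfs filesystem resize 1:max /mnt/test      # grow devid 1 back to the full device size
Without a devid prefix the command applies to devid 1 only, which is one reason a tool like ssm has to resolve the devid first.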
2011 Jan 12
1
Filesystem creation in "degraded mode"
I've had a go at determining exactly what happens when you create a filesystem without enough devices to meet the requested replication strategy: # mkfs.btrfs -m raid1 -d raid1 /dev/vdb # mount /dev/vdb /mnt # btrfs fi df /mnt Data: total=8.00MB, used=0.00 System, DUP: total=8.00MB, used=4.00KB System: total=4.00MB, used=0.00 Metadata, DUP: total=153.56MB, used=24.00KB Metadata:
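As a follow-up, on kernels with the balance/restriper filters (3.3 and later, so newer than the thread above), the DUP chunks created by a single-device mkfs can be converted to real raid1 once a second device is present; a rough sketch, assuming the filesystem is mounted at /mnt:
# btrfs device add /dev/vdc /mnt                              # add the missing second device
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt    # rewrite data and metadata as raid1
# btrfs fi df /mnt                                            # profiles should now read "RAID1" instead of "DUP"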
2010 Jul 31
0
how to find out total capacity and raid level of btrfs file system
I know df does not report the correct size for btrfs raid systems. Is there any other way to find out the total capacity of a btrfs file system? Or at least the raid level for data / metadata? My test system is Ubuntu 10.10 alpha with btrfs 0.19 and a 2.6.35 kernel (don't know which rc). # uname -a Linux ubuntu 2.6.35-12-generic #17-Ubuntu SMP Mon Jul 26 18:48:06 UTC 2010 x86_64
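A hedged sketch of what was available in that btrfs-progs era: the per-device raw capacity comes from "btrfs filesystem show", and the raid level of data and metadata shows up in the profile names printed by "btrfs filesystem df" on the mounted filesystem:
# btrfs filesystem show          # devices, raw size and allocated bytes per device
# btrfs filesystem df /mnt       # e.g. "Data, RAID1: total=..., used=..." gives the profile
Note that those totals count one logical copy, so RAID1 data occupies roughly twice that amount of raw device space.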
2017 Jul 07
0
I/O error for one folder within the mountpoint
What does the mount log say when you get the EIO error on snooper? Check if there is a gfid mismatch on snooper directory or the files under it for all 3 bricks. In any case the mount log or the glustershd.log of the 3 nodes for the gfids you listed below should give you some idea on why the files aren't healed. Thanks. On 07/07/2017 03:10 PM, Florian Leleu wrote: > > Hi Ravi, >
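One way to do that check, assuming the brick path shown later in this thread (/mnt/gluster-applicatif/brick) and running as root on each of the three nodes, is to compare the trusted.gfid extended attribute of the directory on every brick:
# getfattr -d -m . -e hex /mnt/gluster-applicatif/brick/snooper    # trusted.gfid must match on all bricks
If the hex gfid differs between bricks, self-heal will keep failing until the mismatch is resolved.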
2012 Dec 13
22
[PATCH] Btrfs: fix a deadlock on chunk mutex
A user reported that he hit an annoying deadlock while playing with ceph on top of btrfs. Currently, updating the device tree requires space from a METADATA chunk, so we -may- need to do a recursive chunk allocation when adding/updating a dev extent; that is where the deadlock comes from. If we use SYSTEM metadata to update the device tree, we can avoid the recursion. Reported-by: Jim Schutt
2017 Jul 07
0
I/O error for one folder within the mountpoint
On 07/07/2017 03:39 PM, Florian Leleu wrote: > > I guess you're right about the gfid, I got that: > > [2017-07-07 07:35:15.197003] W [MSGID: 108008] > [afr-self-heal-name.c:354:afr_selfheal_name_gfid_mismatch_check] > 0-applicatif-replicate-0: GFID mismatch for > <gfid:3fa785b5-4242-4816-a452-97da1a5e45c6>/snooper > b9222041-72dd-43a3-b0ab-4169dbd9a87f on
2013 Feb 06
3
btrfs balance -> hang/crash
Hi, my btrfs "hangs" when doing a balance operation. I'm using a 3.7.1 kernel from opensuse: linux-opzz 3.7.1-2.10-m4 #11 SMP PREEMPT Fri Jan 11 18:04:04 CET 2013 x86_64 x86_64 x86_64 GNU/Linux and Btrfs v0.19+ I did a scrub which completed without errors. Then I tried "btrfs filesystem balance /" which worked fine for the first 23 of 46 chunks, then it stopped
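One common way to narrow this down (hedged, assuming a kernel with balance filters, i.e. 3.3 or later, which a 3.7.1 kernel has) is to balance in smaller slices instead of touching all chunks at once, and to watch or cancel the operation from another shell:
# btrfs balance start -dusage=25 /    # only relocate data chunks that are at most 25% full
# btrfs balance status /              # progress, from a second terminal
# btrfs balance cancel /              # ask a stuck balance to stop at the next safe point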
2017 Jul 07
2
I/O error for one folder within the mountpoint
Hi Ravi, thanks for your answer, sure there you go: # gluster volume heal applicatif info Brick ipvr7.xxx:/mnt/gluster-applicatif/brick <gfid:e3b5ef36-a635-4e0e-bd97-d204a1f8e7ed> <gfid:f8030467-b7a3-4744-a945-ff0b532e9401> <gfid:def47b0b-b77e-4f0e-a402-b83c0f2d354b> <gfid:46f76502-b1d5-43af-8c42-3d833e86eb44> <gfid:d27a71d2-6d53-413d-b88c-33edea202cc2>
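For completeness, a gfid mismatch such as the one discussed in this thread may also show up in the dedicated split-brain listing, and an index heal can be retriggered explicitly; a short sketch using the volume name from above:
# gluster volume heal applicatif info split-brain    # list entries the self-heal daemon considers split-brain
# gluster volume heal applicatif                     # trigger an index heal on the volume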
2012 Oct 25
46
[RFC] New attempt to a better "btrfs fi df"
Hi all, this is a new attempt to improve the output of the command "btrfs fi df". The previous attempt received a good reception. However, there was no general consensus about the wording. Moreover, I still didn't understand how btrfs was using the disks. My first attempt was to develop a new command which shows how the disks
2017 Jul 07
2
I/O error for one folder within the mountpoint
I guess you're right about the gfid, I got that: [2017-07-07 07:35:15.197003] W [MSGID: 108008] [afr-self-heal-name.c:354:afr_selfheal_name_gfid_mismatch_check] 0-applicatif-replicate-0: GFID mismatch for <gfid:3fa785b5-4242-4816-a452-97da1a5e45c6>/snooper b9222041-72dd-43a3-b0ab-4169dbd9a87f on applicatif-client-1 and 60056f98-20f8-4949-a4ae-81cc1a139147 on applicatif-client-0 Can you
2013 Aug 24
10
Help interpreting RAID1 space allocation
I've created a test volume and copied a bulk of data to it, however the results of the space allocation are confusing at best. I've tried to capture the history of events leading up to the current state. This is all on a Debian Wheezy system using a 3.10.5 kernel package (linux-image-3.10-2-amd64) and btrfs tools v0.20-rc1 (Debian package 0.19+20130315-5). The host uses an
2013 Jan 03
33
Option LABEL
Hello, linux-btrfs, please delete the option "-L" (for labelling) in "mkfs.btrfs"; in some configurations it doesn't work as expected. My usual way: mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd ... One call for several devices. When I add the option "-L mylabel", each device gets the same label, and therefore some other programs
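A possible workaround, assuming a btrfs-progs version that already has the label subcommand, is to leave -L out of mkfs and set a single label on the finished filesystem afterwards (shown here on the unmounted device; newer kernels also accept the mount point):
# mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd
# btrfs filesystem label /dev/sdb mylabel    # one label for the whole filesystem
# btrfs filesystem label /dev/sdb            # read the label back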