Displaying 8 results from an estimated 8 matches for "c1d5".

2009 Jan 09
0
PROBLEM: I/O stalls when running fstress against multi-device fs
...e web-accessible backtraces - please let me know if you'd like more information, etc. Thanks, Eric commit: 755efdc3c4d3b42d5ffcef0f4d6e5b37ecd3bf21 uname -a: Linux bl465cb.lnx.usa.hp.com 2.6.28-btrfs-unstable #1 SMP Thu Jan 8 14:34:46 EST 2009 x86_64 GNU/Linux mounted as: /dev/cciss/c1d5 on /mnt type btrfs (rw) btrfs-show: Label: none uuid: 6c4ea7e8-1e68-4fb6-aa99-254a67ea81f2 Total devices 6 FS bytes used 4.09GB devid 4 size 68.33GB used 3.00GB path /dev/cciss/c1d3 devid 1 size 68.33GB used 2.02GB path /dev/cciss/c1d0 devid 5 size 68.33GB used 2.01GB path /dev/cciss...
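For context, the multi-device layout in that report can be approximated as below. This is only a sketch: the quoted output gives the device names but not the RAID profiles, so the -d/-m choices are assumptions, and the btrfs-show tool quoted above has since been folded into "btrfs filesystem show".

  # mkfs.btrfs -d raid0 -m raid1 /dev/cciss/c1d0 /dev/cciss/c1d3 /dev/cciss/c1d5   (multi-device filesystem; profiles are illustrative)
  # mount /dev/cciss/c1d5 /mnt                                                     (mounting any member device mounts the whole filesystem)
  # btrfs filesystem show                                                           (modern equivalent of the btrfs-show output quoted above)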
2017 Jul 07
2
I/O error for one folder within the mountpoint
...ce84-4f07-bd66-5a0e17edb2b0> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558> <gfid:842b30c1-6016-45bd-9685-6be76911bd98> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471> <gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316> <gfid:01409b23-eff2-4bda-966e-ab6133784001> <gfid:c723e484-63fc-4267-b3f0-4090194370a0> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104> <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda> <gfid:3f6438a0-2712...
2017 Jul 07
0
I/O error for one folder within the mountpoint
...; > <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e> > <gfid:8261c22c-e85a-4d0e-b057-196b744f3558> > <gfid:842b30c1-6016-45bd-9685-6be76911bd98> > <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38> > <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471> > <gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316> > <gfid:01409b23-eff2-4bda-966e-ab6133784001> > <gfid:c723e484-63fc-4267-b3f0-4090194370a0> > <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed> > <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104> > <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda&g...
2017 Jul 07
2
I/O error for one folder within the mountpoint
...f8a2-1a3f-4219-86a1-c19f51f08f4e> >> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558> >> <gfid:842b30c1-6016-45bd-9685-6be76911bd98> >> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38> >> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471> >> <gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316> >> <gfid:01409b23-eff2-4bda-966e-ab6133784001> >> <gfid:c723e484-63fc-4267-b3f0-4090194370a0> >> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed> >> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104> >> <gfid:a8f6d7e5-0ff2-4747...
2017 Jul 07
0
I/O error for one folder within the mountpoint
...c19f51f08f4e> >>> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558> >>> <gfid:842b30c1-6016-45bd-9685-6be76911bd98> >>> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38> >>> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471> >>> <gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316> >>> <gfid:01409b23-eff2-4bda-966e-ab6133784001> >>> <gfid:c723e484-63fc-4267-b3f0-4090194370a0> >>> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed> >>> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104> >>> <gfi...
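The gfid lists in the four messages above look like the output of the GlusterFS heal-info commands. A minimal sketch of how one such entry is usually tracked down, assuming that origin; the volume name and gfid are taken from the thread, while the brick path and the final file path are placeholders:

  # gluster volume heal applicatif info                 (lists entries pending heal, as paths or raw gfids)
  # gluster volume heal applicatif info split-brain     (narrows the list to split-brain entries)
  # ls -l /srv/brick/.glusterfs/ff/48/ff48a14a-c1d5-45c6-a52a-b3e2402d0316   (the gfid's backing file on a brick; /srv/brick is a placeholder)
  # gluster volume heal applicatif split-brain latest-mtime /path/inside/volume   (one possible resolution policy, if split-brain is confirmed)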
2006 Jul 17
11
ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
Hi All, I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes, I'm that excited about it!), so naturally I'm looking
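The post does not show the pool layout, so the following is only an illustrative sketch of an 8-disk raidz2 pool and a crude sequential-write check on Solaris; the device names, dataset name, and sizes are invented:

  # zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
  # zpool status tank                                   (verify the vdev layout before benchmarking)
  # zfs create tank/bench
  # dd if=/dev/zero of=/tank/bench/seq.bin bs=1024k count=8192   (very rough check; sustained results need a proper benchmark tool)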
2017 Jul 07
0
I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote: > > Hello everyone, > > first time on the ML, so excuse me if I'm not following the rules well; > I'll improve if I get comments. > > We have one volume "applicatif" on three nodes (2 data nodes and 1 arbiter); each > of the following commands was run on node ipvr8.xxx: > > # gluster volume info applicatif > > Volume
2017 Jul 07
2
I/O error for one folder within the mountpoint
Hello everyone, first time on the ML, so excuse me if I'm not following the rules well; I'll improve if I get comments. We have one volume "applicatif" on three nodes (2 data nodes and 1 arbiter); each of the following commands was run on node ipvr8.xxx: # gluster volume info applicatif Volume Name: applicatif Type: Replicate Volume ID: ac222863-9210-4354-9636-2c822b332504 Status: Started
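For reference, a 2 + 1 arbiter replicated volume like "applicatif" is typically created along these lines. This is a sketch only: ipvr8.xxx is the one hostname that appears in the post, while the other peer names and the brick paths are invented:

  # gluster volume create applicatif replica 3 arbiter 1 ipvr8.xxx:/data/brick/applicatif nodeB:/data/brick/applicatif nodeC:/data/brick/applicatif
  # gluster volume start applicatif
  # gluster volume info applicatif                      (should report Type: Replicate with an arbiter brick, matching the output quoted above)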