search for: 425b

Displaying 5 results from an estimated 5 matches for "425b".

2010 Nov 02
0
raid0 corruption, how to restore?
...irst experience with oops. After two days, the btrfs raid failed to mount via fstab and when I manually tried to mount it, there was a kernel panic. I would like to recover the data from the btrfs raid. If I issue the command: btrfs-show, I receive:
Label: HD103SJ_btrfs  uuid: 2ff6e89d-b790-425b-837b-475d1504be69
  Total devices 2 FS bytes used 733.80GB
  devid 1 size 931.51GB used 369.53GB path /dev/sdd
  devid 2 size 931.51GB used 369.51GB path /dev/sde
Btrfs Btrfs v0.19
Obviously the RAID and the filesystems are still there. I then try to fsck the drives:...
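A generic fsck does not understand btrfs, so recovery in a case like this is normally attempted with the btrfs tools themselves. A minimal sketch, assuming a reasonably current btrfs-progs; the device paths /dev/sdd and /dev/sde come from the quoted btrfs-show output, while the mountpoint /mnt/recovery and the destination /srv/rescue are hypothetical:

    # offline consistency check that does not write to the devices
    btrfs check --readonly /dev/sdd
    # "degraded" lets btrfs assemble the RAID even if one device is missing or damaged
    mount -t btrfs -o ro,degraded /dev/sdd /mnt/recovery
    # last resort: copy files off the unmountable filesystem to another disk
    btrfs restore /dev/sdd /srv/rescue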
2014 Mar 01
0
Rails controller problems
...n email to rubyonrails-talk+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org To post to this group, send email to rubyonrails-talk-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/18a42aa1-a4fb-425b-a182-1db5faff0a6f%40googlegroups.com. For more options, visit https://groups.google.com/groups/opt_out.
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...40e2-8dc0-a26f8faa5628>
<gfid:fa4185b0-e5ab-4fdc-9dca-cb6ba33dcc8d>
<gfid:8b2cf4bf-8c2a-465e-8f28-3e9a7f517268>
<gfid:13925c48-fda4-40bd-bfcb-d7ced99b82b2>
<gfid:292e3a0e-7114-4c97-b688-e94503047b58>
<gfid:a52d1173-e034-4b57-9170-a7c91cbe2904>
<gfid:5c830c7b-97b7-425b-9ab2-761ef2f41e88>
<gfid:420c76a8-1598-4136-9c77-88c8d59d24e7>
<gfid:ea6dbca2-f7e3-4015-ae34-04e8bf31fd4f>
... And so forth. Out of 80k+ lines, fewer than 200 are not related to gfids (and yes, the number of gfids is well beyond 64999):
# grep -c gfid heal-info.fpack
80578
# grep...
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...40e2-8dc0-a26f8faa5628>
<gfid:fa4185b0-e5ab-4fdc-9dca-cb6ba33dcc8d>
<gfid:8b2cf4bf-8c2a-465e-8f28-3e9a7f517268>
<gfid:13925c48-fda4-40bd-bfcb-d7ced99b82b2>
<gfid:292e3a0e-7114-4c97-b688-e94503047b58>
<gfid:a52d1173-e034-4b57-9170-a7c91cbe2904>
<gfid:5c830c7b-97b7-425b-9ab2-761ef2f41e88>
<gfid:420c76a8-1598-4136-9c77-88c8d59d24e7>
<gfid:ea6dbca2-f7e3-4015-ae34-04e8bf31fd4f>
... And so forth. Out of 80k+ lines, fewer than 200 are not related to gfids (and yes, the number of gfids is well beyond 64999):
# grep -c gfid heal-info.fpack
80578
# grep...
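The counts above come from a saved heal-info dump, and the same bookkeeping can be reproduced once the new arbiter is in place. A minimal sketch, assuming the volume name myvol and the file name heal-info.fpack from the quoted posts; whether a full heal is advisable depends on the I/O headroom mentioned in the thread:

    # save the current self-heal backlog, then separate gfid entries from headers
    gluster volume heal myvol info > heal-info.fpack
    grep -c gfid heal-info.fpack     # pending entries, identified only by gfid
    grep -vc gfid heal-info.fpack    # brick headers and other non-gfid lines
    # ask the self-heal daemon to crawl the volume and heal everything it finds
    gluster volume heal myvol full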
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks, I'm troubled moving an arbiter brick to another server because of I/O load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3:
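Moving an arbiter brick to another server is normally done with replace-brick, after which self-heal repopulates the new brick. A minimal sketch, assuming the volume name myvol from the quoted output; the hostnames arb-old and arb-new and the brick path /data/glusterfs-arb are hypothetical, since the excerpt is cut off before the arbiter bricks are listed:

    # swap the arbiter brick onto the new server ("commit force" replaces it in one step)
    gluster volume replace-brick myvol arb-old:/data/glusterfs-arb arb-new:/data/glusterfs-arb commit force
    # self-heal then rebuilds the new arbiter; watch the backlog shrink
    gluster volume heal myvol info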