search for: 49f5

Displaying 7 results from an estimated 7 matches for "49f5".

2011 Aug 14
3
can't mount degraded (it worked in kernel 2.6.38.8)
...x86_64 x86_64 x86_64 GNU/Linux mkdir test5 cd test5 dd if=/dev/null of=img5 bs=1 seek=2G dd if=/dev/null of=img6 bs=1 seek=2G losetup /dev/loop2 img5 losetup /dev/loop3 img6 mkfs.btrfs -d raid1 -m raid1 /dev/loop2 /dev/loop3 btrfs device scan btrfs filesystem show Label: none uuid: d7ba6c4e-04ed-49f5-88cd-8432c948e822 Total devices 2 FS bytes used 28.00KB devid 1 size 2.00GB used 437.50MB path /dev/loop4 devid 2 size 2.00GB used 417.50MB path /dev/loop5 mkdir dir mount -t btrfs /dev/loop2 dir umount dir losetup -d /dev/loop3 mount -t btrfs -o degraded /dev/loop2 d...
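The excerpt above walks through building a two-device btrfs raid1 on loopback files and then attempting a degraded mount after detaching one device. A minimal sketch of those steps, reassembled from the snippet (the loop device numbers are the ones shown; on a real system `losetup -f` picks free ones):

    mkdir test5 && cd test5
    dd if=/dev/null of=img5 bs=1 seek=2G            # 2 GB sparse backing files
    dd if=/dev/null of=img6 bs=1 seek=2G
    losetup /dev/loop2 img5
    losetup /dev/loop3 img6
    mkfs.btrfs -d raid1 -m raid1 /dev/loop2 /dev/loop3
    mkdir dir
    mount -t btrfs /dev/loop2 dir && umount dir     # a normal mount works
    losetup -d /dev/loop3                           # drop one half of the mirror
    mount -t btrfs -o degraded /dev/loop2 dir       # the degraded mount reported to fail after 2.6.38.8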
2017 Nov 15
1
unable to remove brick, please help
Hi, I am trying to remove a brick, from a server which is no longer part of the gluster pool, but I keep running into errors for which I cannot find answers on google. [root at virt2 ~]# gluster peer status Number of Peers: 3 Hostname: srv1 Uuid: 2bed7e51-430f-49f5-afbc-06f8cec9baeb State: Peer in Cluster (Disconnected) Hostname: srv3 Uuid: 0e78793c-deca-4e3b-a36f-2333c8f91825 State: Peer in Cluster (Connected) Hostname: srv4 Uuid: 1a6eedc6-59eb-4329-b091-2b9bc6f0834f State: Peer in Cluster (Connected) [root at virt2 ~]# [root at virt2 ~]# gluster volum...
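The poster is trying to drop a brick whose host has already left the pool; the excerpt shows the disconnected peer in `gluster peer status`. A hedged sketch of the usual sequence, with a hypothetical volume name (myvol) and brick path (srv1:/bricks/b1) standing in for the truncated ones:

    gluster peer status                                        # confirm which peer is Disconnected
    gluster volume info myvol                                  # confirm the brick still belongs to the volume
    gluster volume remove-brick myvol srv1:/bricks/b1 start    # migrate data off the brick if it is reachable
    gluster volume remove-brick myvol srv1:/bricks/b1 status
    gluster volume remove-brick myvol srv1:/bricks/b1 commit
    # If the host is gone for good and migration is impossible, force skips it
    # (for a replicated volume the new replica count must be passed as well,
    # e.g. "remove-brick myvol replica 2 srv1:/bricks/b1 force"):
    gluster volume remove-brick myvol srv1:/bricks/b1 force
    gluster peer detach srv1 force                             # only once no brick references the dead peer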
2010 Jul 07
2
Bug#588310: Xen enabled kernel cannot find the / partition
...ba-2fdcd88eddc9 /boot ext4 defaults 0 2 /dev/mapper/vg0-tmp /tmp ext4 noatime,nodev,nosuid 0 2 /dev/mapper/vg0-usr /usr ext4 defaults 0 2 /dev/mapper/vg0-var /var ext4 noatime,nodev 0 2 UUID=05b534c0-49f5-4923-a5f3-a55314084c03 none swap sw 0 0 /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0 /dev/sdb1 /media/usb0 auto rw,user,noauto 0 0 none /proc/xen xenfs defaults 0 0...
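The fstab fragment mixes /dev/mapper paths with UUID= entries, so a quick sanity check when the Xen-enabled kernel cannot find the root filesystem is to confirm that the UUIDs listed in /etc/fstab are actually visible to the booted kernel. A minimal sketch, using the swap UUID from the excerpt:

    blkid | grep -i 05b534c0-49f5-4923-a5f3-a55314084c03   # does the running kernel see this UUID at all?
    grep -E '^(UUID|/dev)' /etc/fstab                      # compare against what fstab expects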
2012 May 08
6
registry vulnerabilities in R
...otocol=17|Profile=Public|RPort=137|RA4=LocalSubnet|RA6=LocalSubnet|App=System|Name=@FirewallAPI.dll,-28523|Desc=@FirewallAPI.dll,-28526|EmbedCtxt @FirewallAPI.dll,-28502|" HKEY_LOCAL_MACHINE\System\ControlSet001\services\SharedAccess\Parameters\FirewallPolicy\FirewallRules "{4B397AFB-D32D-49F5-9087-824DAC4F5E1E}"="v2.10|Action=Allow|Active=TRUE|Dir=In|Protocol=17|Profile=Public|LPort=137|RA4=LocalSubnet|RA6=LocalSubnet|App=System|Name=@FirewallAPI.dll,-28519|Desc=@FirewallAPI.dll,-28522|EmbedCtxt @FirewallAPI.dll,-28502|" HKEY_LOCAL_MACHINE\System\ControlSet001\services\Sh...
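The registry dump above lists per-rule values under the FirewallRules key. A hedged one-liner for pulling the same entry back out on a Windows machine, using the stock reg.exe and the GUID fragment that matched the search (run from a command prompt):

    reg query "HKLM\System\ControlSet001\services\SharedAccess\Parameters\FirewallPolicy\FirewallRules" /f 49F5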
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
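The request above boils down to three checks run on every brick host. Spelled out as a sketch, with a hypothetical volume name (home) and brick path, since the real ones are elided in the snippet:

    gluster volume info home
    getfattr -d -e hex -m . /bricks/home/brick1/path/to/file   # AFR changelog xattrs for the unhealed file
    tail -n 200 /var/log/glusterfs/glustershd.log              # self-heal daemon log, default location
    tail -n 200 /var/log/glusterfs/glfsheal-home.log           # glfsheal log, default naming assumed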
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on ae4355e4-6e2e-4910-9216-98ac7a5c18ac. sources=0 [2] sinks=1 [2017-10-25 10:40:29.295358] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on cc40f5c0-60c5-49f5-906b-ff1a1eed41a0. sources=0 [2] sinks=1 [2017-10-25 10:40:29.302995] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 1fcffdcc-e22d-454e-aeab-16597ffee1f9 [2017-10-25 10:40:29.306096] I [MSGID: 108026] [afr-self-heal-c...
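The log lines quoted above come from the self-heal daemon and name the gfid of each healed file, so the quickest check for a single stuck file is to grep its gfid in the log and then query the pending-heal list. A minimal sketch, assuming the default log path and the volume name "home" implied by "0-home-replicate-0":

    grep 'selfheal on cc40f5c0-60c5-49f5-906b-ff1a1eed41a0' /var/log/glusterfs/glustershd.log
    gluster volume heal home info                    # lists entries still pending heal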