Displaying 16 results from an estimated 16 matches for "4f9b".
2013 Jun 21
7
IRB help
...n email to rubyonrails-talk+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To post to this group, send email to rubyonrails-talk-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/eeef5f0a-4225-4f9b-9ac0-30aafb351ff0%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
2017 Jul 31
3
gluster volume 3.10.4 hangs
Hi folks,
I'm running a simple gluster setup with a single volume replicated across two servers, as follows:
Volume Name: gv0
Type: Replicate
Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: sst0:/var/glusterfs
Brick2: sst2:/var/glusterfs
Options Reconfigured:
cluster.self-heal-daemon: enable
performance.readdir-ahead: on
nfs.disable: on
transport.address-family: inet
Th...
2017 Sep 25
2
how to verify bitrot signed file manually?
...d0bd4
> trusted.bit-rot.version=0x020000000000000058e4f3b40006793d
> trusted.ec.config=0x0000080a02000200
> trusted.ec.dirty=0x00000000000000000000000000000000
> trusted.ec.size=0x0000000718996701
> trusted.ec.version=0x0000000000038c4c0000000000038c4d
> trusted.gfid=0xf078a24134fe4f9bb953eca8c28dea9a
>
> output scrub log:
> [2017-09-02 13:02:20.311160] A [MSGID: 118023] [bit-rot-scrub.c:244:bitd_compare_ckum]
> 0-qubevaultdr-bit-rot-0: CORRUPTION DETECTED: Object /file-1 {Brick:
> /media/disk16/brick16 | GFID: f078a241-34fe-4f9b-b953-eca8c28dea9a}
> [2017-09-02...
2017 Sep 22
0
how to verify bitrot signed file manually?
...1768ce2df905ded1668f665e06eca2d0bd4
trusted.bit-rot.version=0x020000000000000058e4f3b40006793d
trusted.ec.config=0x0000080a02000200
trusted.ec.dirty=0x00000000000000000000000000000000
trusted.ec.size=0x0000000718996701
trusted.ec.version=0x0000000000038c4c0000000000038c4d
trusted.gfid=0xf078a24134fe4f9bb953eca8c28dea9a
output scrub log:
[2017-09-02 13:02:20.311160] A [MSGID: 118023]
[bit-rot-scrub.c:244:bitd_compare_ckum] 0-qubevaultdr-bit-rot-0: CORRUPTION
DETECTED: Object /file-1 {Brick: /media/disk16/brick16 | GFID:
f078a241-34fe-4f9b-b953-eca8c28dea9a}
[2017-09-02 13:02:20.311579] A [MSGID: 1...
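For anyone trying to locate the flagged object on the brick: GlusterFS keeps a hard link to every file under .glusterfs, indexed by the first two and next two hex digits of its GFID, so the GFID from the scrub log above resolves roughly as sketched below (paths are the ones quoted in the log).
# <brick>/.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full gfid>
ls -li /media/disk16/brick16/.glusterfs/f0/78/f078a241-34fe-4f9b-b953-eca8c28dea9a
ls -li /media/disk16/brick16/file-1   # should report the same inode number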
2017 Sep 21
2
how to verify bitrot signed file manually?
Hi,
I have a file in my brick which was signed by bitrot, and later, when
running scrub, it was marked as bad.
Now I want to verify the file again manually, just to clarify my doubt:
how can I do this?
regards
Amudhan
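One way to re-check such a file by hand, sketched under the assumption that bitrot is using its default SHA-256 hash and that the brick path is the one quoted elsewhere in this thread: compute the checksum directly on the brick and compare it with the hash carried in the trusted.bit-rot.signature xattr.
getfattr -n trusted.bit-rot.signature -e hex /media/disk16/brick16/file-1
sha256sum /media/disk16/brick16/file-1
# Assuming the xattr layout is a short header followed by the hash, its trailing
# 64 hex digits should equal the sha256sum output; a mismatch is what the
# scrubber reports as CORRUPTION DETECTED.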
2017 Oct 03
1
how to verify bitrot signed file manually?
...000000000000058e4f3b40006793d
>>> trusted.ec.config=0x0000080a02000200
>>> trusted.ec.dirty=0x00000000000000000000000000000000
>>> trusted.ec.size=0x0000000718996701
>>> trusted.ec.version=0x0000000000038c4c0000000000038c4d
>>> trusted.gfid=0xf078a24134fe4f9bb953eca8c28dea9a
>>>
>>> output scrub log:
>>> [2017-09-02 13:02:20.311160] A [MSGID: 118023]
>>> [bit-rot-scrub.c:244:bitd_compare_ckum] 0-qubevaultdr-bit-rot-0:
>>> CORRUPTION DETECTED: Object /file-1 {Brick: /media/disk16/brick16 | GFID:
>>> f...
2017 Sep 29
1
how to verify bitrot signed file manually?
...it-rot.version=0x020000000000000058e4f3b40006793d
>> trusted.ec.config=0x0000080a02000200
>> trusted.ec.dirty=0x00000000000000000000000000000000
>> trusted.ec.size=0x0000000718996701
>> trusted.ec.version=0x0000000000038c4c0000000000038c4d
>> trusted.gfid=0xf078a24134fe4f9bb953eca8c28dea9a
>>
>> output scrub log:
>> [2017-09-02 13:02:20.311160] A [MSGID: 118023]
>> [bit-rot-scrub.c:244:bitd_compare_ckum] 0-qubevaultdr-bit-rot-0:
>> CORRUPTION DETECTED: Object /file-1 {Brick: /media/disk16/brick16 | GFID:
>> f078a241-34fe-4f9b-b953-e...
2013 Oct 29
1
lpxelinux.0 - 6.02 - failed to load ldlinux.c32
.......
!PXE entry point found (we hope) at 9073:00F6 via plan A
UNDI code segment at 9073 len 216A
UNDI data segment at 9059 len 01A0
UNDI: baseio ec00 int 11 MTU 1500 type 1 "DIX+802.3" flags 0x81b
Getting cached Packet 01 02 03
MY IP addr seems to be 172.31.126.10
UNDI : IRQ11(0x73):0a63:4f9b -> 0000:8de2
Mac: Addr: 80 ee 73 18 73 f0
Hope there is some info here which can help.
Thanks and bye, Roman
Btw, what are those "flags"? 8-)
-----Original Message-----
From: Syslinux [mailto:syslinux-bounces at zytor.com] On behalf of Gene Cumm
Sent: Saturday, 26 October...
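A note for later readers, not from the original thread: with 6.02, "failed to load ldlinux.c32" usually means ldlinux.c32 is missing from the TFTP directory or comes from a different Syslinux build than lpxelinux.0. A sketch of the usual fix, assuming a /var/lib/tftpboot root and the stock 6.02 archive layout:
# Both files must come from the same 6.02 tree and sit in the same TFTP directory.
cp syslinux-6.02/bios/core/lpxelinux.0 /var/lib/tftpboot/
cp syslinux-6.02/bios/com32/elflink/ldlinux/ldlinux.c32 /var/lib/tftpboot/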
2017 Nov 06
0
how to verify bitrot signed file manually?
...t;>> trusted.ec.config=0x0000080a02000200
>>>>> trusted.ec.dirty=0x00000000000000000000000000000000
>>>>> trusted.ec.size=0x0000000718996701
>>>>> trusted.ec.version=0x0000000000038c4c0000000000038c4d
>>>>> trusted.gfid=0xf078a24134fe4f9bb953eca8c28dea9a
>>>>>
>>>>> output scrub log:
>>>>> [2017-09-02 13:02:20.311160] A [MSGID: 118023]
>>>>> [bit-rot-scrub.c:244:bitd_compare_ckum] 0-qubevaultdr-bit-rot-0:
>>>>> CORRUPTION DETECTED: Object /file-1 {Brick: /med...
2017 Aug 02
0
gluster volume 3.10.4 hangs
August 1, 2017 12:29 AM, "WK" wrote:
On 7/31/2017 1:12 AM, Seva Gluschenko wrote:
Hi folks,
I'm running a simple gluster setup with a single volume replicated across two servers, as follows:
Volume Name: gv0
Type: Replicate
Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
The problem is that when one of the replica servers hung, it caused the whole glusterfs to hang.
Yes, you lost quorum and the system doesn't want you to get a split-brain.
Could you...
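Aside, not from the original exchange: a 1 x 2 replica has no tie-breaker, which is why a single hung server stalls the whole volume. A common remedy, sketched here with a hypothetical third host sst1 and brick path, is to add an arbiter brick and enable client quorum:
gluster volume add-brick gv0 replica 3 arbiter 1 sst1:/var/glusterfs-arbiter
gluster volume set gv0 cluster.quorum-type auto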
2010 Sep 20
0
Routed Xen HVM on Centos 5.5 64bit
...I get this:
domid: 14
qemu: the number of cpus is 1
Watching /local/domain/14/logdirty/next-active
Watching /local/domain/0/device-model/14/command
char device redirected to /dev/pts/1
qemu_map_cache_init nr_buckets = 10000
shared page at pfn 7ffff
buffered io page at pfn 7fffd
xs_read(/vm/e3b3d593-4f9b-2c6f-b951-9c2af99d2ecf/rtc/timeoffset): read error
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
Triggered log-dirty buffer switch
but again,
Error: Device 0 (vif) could not be connected.
/etc/xen/scripts/vif-route failed; error detected.
That's a start...
Searchi...
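Aside for anyone hitting the same vif-route failure: routed Xen networking needs the routed scripts enabled in xend-config.sxp, an ip= entry on the guest's vif so vif-route can install the host route, and IP forwarding in dom0. A rough sketch with placeholder values:
# In /etc/xen/xend-config.sxp, switch from the bridged to the routed scripts:
#   (network-script network-route)
#   (vif-script vif-route)
# In the guest config, give the vif the guest's IP (192.0.2.10 is a placeholder):
#   vif = [ 'type=ioemu, ip=192.0.2.10, script=vif-route' ]
# And dom0 has to forward packets:
sysctl -w net.ipv4.ip_forward=1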
2017 Oct 19
3
gluster tiering errors
...er_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed for <file>(gfid:f95f17bf-b696-
44cd-aae0-d8ac38149aa5)
[2017-10-16 16:06:06.880522] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed for <file>(gfid:ec451f6c-8971-
4f9b-a04f-00f96db9b46a)
[2017-10-16 16:06:08.062080] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_file]
0-<vol>-tier-dht: Promotion failed for <file>(gfid:e658cd70-3f6d-
4b25-8d9f-0d4c24d3ec5d)
[2017-10-16 16:06:08.288298] I [MSGID: 109038]
[tier.c:1169:tier_migrate_using_query_fi...
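Not from the original post: when promotions fail in bulk like this, the tier status counters are a quick first check; <vol> is the same placeholder used in the log lines above.
gluster volume tier <vol> status
# reports promoted/demoted file counts and in-progress migrations per node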
2017 Oct 22
0
gluster tiering errors
...uery_file] 0-<vol>-tier-dht: Promotion
> failed for <file>(gfid:f95f17bf-b696-44cd-aae0-d8ac38149aa5)
> [2017-10-16 16:06:06.880522] I [MSGID: 109038]
> [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
> failed for <file>(gfid:ec451f6c-8971-4f9b-a04f-00f96db9b46a)
> [2017-10-16 16:06:08.062080] I [MSGID: 109038]
> [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
> failed for <file>(gfid:e658cd70-3f6d-4b25-8d9f-0d4c24d3ec5d)
> [2017-10-16 16:06:08.288298] I [MSGID: 109038]
> [tier.c:1169:tie...
2017 Oct 22
1
gluster tiering errors
...;vol>-tier-dht: Promotion
>> failed for <file>(gfid:f95f17bf-b696-44cd-aae0-d8ac38149aa5)
>> [2017-10-16 16:06:06.880522] I [MSGID: 109038]
>> [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
>> failed for <file>(gfid:ec451f6c-8971-4f9b-a04f-00f96db9b46a)
>> [2017-10-16 16:06:08.062080] I [MSGID: 109038]
>> [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
>> failed for <file>(gfid:e658cd70-3f6d-4b25-8d9f-0d4c24d3ec5d)
>> [2017-10-16 16:06:08.288298] I [MSGID: 109038]
>&...
2017 Oct 24
2
gluster tiering errors
...;vol>-tier-dht: Promotion
>> failed for <file>(gfid:f95f17bf-b696-44cd-aae0-d8ac38149aa5)
>> [2017-10-16 16:06:06.880522] I [MSGID: 109038]
>> [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
>> failed for <file>(gfid:ec451f6c-8971-4f9b-a04f-00f96db9b46a)
>> [2017-10-16 16:06:08.062080] I [MSGID: 109038]
>> [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
>> failed for <file>(gfid:e658cd70-3f6d-4b25-8d9f-0d4c24d3ec5d)
>> [2017-10-16 16:06:08.288298] I [MSGID: 109038]
>&...
2017 Oct 27
0
gluster tiering errors
...t: Promotion
>>> failed for <file>(gfid:f95f17bf-b696-44cd-aae0-d8ac38149aa5)
>>> [2017-10-16 16:06:06.880522] I [MSGID: 109038]
>>> [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
>>> failed for <file>(gfid:ec451f6c-8971-4f9b-a04f-00f96db9b46a)
>>> [2017-10-16 16:06:08.062080] I [MSGID: 109038]
>>> [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion
>>> failed for <file>(gfid:e658cd70-3f6d-4b25-8d9f-0d4c24d3ec5d)
>>> [2017-10-16 16:06:08.288298] I [MSGI...