Displaying 8 results from an estimated 8 matches for "86a1".
2011 Aug 23 | 0 | xe vm-export fails on debian squeeze
...3-8d7dfd64dbef filename=test.vhd
Error code: SR_BACKEND_FAILURE_46
Error parameters: , The VDI is not available [opterr=Command ['/usr/sbin/vhd-util', 'scan', '-f', '-c', '-m', 'VHD-a3ede265-e6af-4216-86a1-45f8283eae45', '-l', 'VG_XenStorage-425877f7-3986-26fc-1254-35813e8c037c', '-a'] failed (22): ], /usr/lib/xen-common/xapi/sm/util.py:17: DeprecationWarning: The popen2 module is deprecated. Use the subprocess module.
import os, re, s...
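The DeprecationWarning in the traceback above points at a popen2 call in the xapi storage-manager helper util.py. A minimal sketch of the replacement the warning itself suggests, using subprocess; the helper name `doexec` is an assumption for illustration, not taken from the source:

```python
# Sketch: replacing a deprecated popen2-style call with subprocess.
# `doexec` is a hypothetical helper name, not from the original util.py.
import subprocess

def doexec(args):
    """Run a command and return (returncode, stdout, stderr)."""
    proc = subprocess.Popen(
        args,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,  # decode output bytes to str
    )
    out, err = proc.communicate()
    return proc.returncode, out, err

rc, out, err = doexec(["echo", "hello"])
```

Unlike popen2, subprocess exposes the child's return code directly, which is exactly what the vhd-util error above (`failed (22)`) is reporting.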
2017 Jul 07 | 2 | I/O error for one folder within the mountpoint
...2b2-3f2d-41ff-9cad-cd3b5a1e506a>
Status: Connected
Number of entries: 6
Brick ipvr8.xxx:/mnt/gluster-applicatif/brick
<gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd>
<gfid:8057d06e-5323-47ff-8168-d983c4a82475>
<gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0>
<gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e>
<gfid:8261c22c-e85a-4d0e-b057-196b744f3558>
<gfid:842b30c1-6016-45bd-9685-6be76911bd98>
<gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38>
<gfid:9d041c80-b7e4-4012-a097-3db5b09fe471>
<gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316>
<gfid:01409b23-eff2-4bda-966e...
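The `<gfid:...>` entries above are how `gluster volume heal <volume> info` lists files it can only identify by GFID. On a brick, each GFID normally corresponds to a hard link under `.glusterfs/<first two hex chars>/<next two>/<full gfid>`, so an entry can be mapped back to a backend path for inspection. A small sketch assuming that standard layout; the brick path is taken from the output above:

```python
# Sketch: map a GlusterFS gfid to its .glusterfs backend path on a brick,
# assuming the standard .glusterfs/aa/bb/<gfid> hard-link layout.
import os

def gfid_to_backend_path(brick_root, gfid):
    """Return the backend path for a gfid like 'baedf8a2-1a3f-...'."""
    return os.path.join(brick_root, ".glusterfs", gfid[0:2], gfid[2:4], gfid)

p = gfid_to_backend_path("/mnt/gluster-applicatif/brick",
                         "baedf8a2-1a3f-4219-86a1-c19f51f08f4e")
```

Running `stat` on that path on the brick (and checking its link count) is a common first step when heal info reports bare gfids.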
2016 Aug 09 | 2 | Asterisk 11.23.0 on CentOS6 : how to get ICE support ?
...:15:50] a=rtpmap:105 CN/16000
[Aug 9 22:15:50] a=rtpmap:13 CN/8000
[Aug 9 22:15:50] a=rtpmap:126 telephone-event/8000
[Aug 9 22:15:50] a=maxptime:60
[Aug 9 22:15:50] a=ssrc:1885999682 cname:yLxCKvQLz0YJGRkR
[Aug 9 22:15:50] a=ssrc:1885999682 msid:BJSlrOtzPj6wzI3QugifY58Oi18zpEbkNsps f0144e6c-86a1-4b08-bf58-4ced92361250
[Aug 9 22:15:50] a=ssrc:1885999682 mslabel:BJSlrOtzPj6wzI3QugifY58Oi18zpEbkNsps
[Aug 9 22:15:50] a=ssrc:1885999682 label:f0144e6c-86a1-4b08-bf58-4ced92361250
[Aug 9 22:15:50] <------------->
[Aug 9 22:15:50] --- (13 headers 40 lines) ---
[Aug 9 22:15:50] Using IN...
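On the ICE question itself: in Asterisk of this vintage, ICE and STUN support for RTP is typically switched on in rtp.conf. A hedged sketch only; `stun.example.com` is a placeholder, and the option names should be checked against the rtp.conf sample shipped with the installed version:

```ini
; rtp.conf (sketch, assumptions noted above)
[general]
icesupport=true
stunaddr=stun.example.com
```

chan_sip also has a per-peer `icesupport` option in sip.conf; whether it is needed depends on the setup, so verify against the local sample configs.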
2017 Jul 07 | 0 | I/O error for one folder within the mountpoint
...; Status: Connected
> Number of entries: 6
>
> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick
> <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd>
> <gfid:8057d06e-5323-47ff-8168-d983c4a82475>
> <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0>
> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e>
> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558>
> <gfid:842b30c1-6016-45bd-9685-6be76911bd98>
> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38>
> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471>
> <gfid:ff48a14a-c1d5-45c6-a52a-b3e2402d0316>
> <...
2017 Jul 07 | 2 | I/O error for one folder within the mountpoint
...Number of entries: 6
>>
>> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick
>> <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd>
>> <gfid:8057d06e-5323-47ff-8168-d983c4a82475>
>> <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0>
>> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e>
>> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558>
>> <gfid:842b30c1-6016-45bd-9685-6be76911bd98>
>> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38>
>> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471>
>> <gfid:ff48a14a-c1d5-45c6-a52a-b3e2...
2017 Jul 07 | 0 | I/O error for one folder within the mountpoint
...>>>
>>> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick
>>> <gfid:47ddf66f-a5e9-4490-8cd7-88e8b812cdbd>
>>> <gfid:8057d06e-5323-47ff-8168-d983c4a82475>
>>> <gfid:5b2ea4e4-ce84-4f07-bd66-5a0e17edb2b0>
>>> <gfid:baedf8a2-1a3f-4219-86a1-c19f51f08f4e>
>>> <gfid:8261c22c-e85a-4d0e-b057-196b744f3558>
>>> <gfid:842b30c1-6016-45bd-9685-6be76911bd98>
>>> <gfid:1fcaef0f-c97d-41e6-87cd-cd02f197bf38>
>>> <gfid:9d041c80-b7e4-4012-a097-3db5b09fe471>
>>> <gfid:ff48a14a...
2017 Jul 07 | 0 | I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote:
>
> Hello everyone,
>
> first time on the ML, so excuse me if I'm not following the rules well;
> I'll improve if I get comments.
>
> We have one volume "applicatif" on three nodes (two data nodes and one
> arbiter); each of the following commands was run on node ipvr8.xxx:
>
> # gluster volume info applicatif
>
> Volume
2017 Jul 07 | 2 | I/O error for one folder within the mountpoint
Hello everyone,
first time on the ML, so excuse me if I'm not following the rules well;
I'll improve if I get comments.
We have one volume "applicatif" on three nodes (two data nodes and one
arbiter); each of the following commands was run on node ipvr8.xxx:
# gluster volume info applicatif
Volume Name: applicatif
Type: Replicate
Volume ID: ac222863-9210-4354-9636-2c822b332504
Status: Started