Displaying 8 results from an estimated 8 matches for "9bff".
2010 Apr 28 · 1 · peth0 unavailable for virt-manager virt-install
...e
xen-3.0.3-94.el5
libvirt-0.6.3-20.1.el5_4
virt-manager-0.6.1-8.el5
xen-libs-3.0.3-94.el5
kernel 2.6.18-164.15.1.el5xen
peth0 is there:
eth0 Link encap:Ethernet HWaddr 00:21:9B:9B:A7:56
inet addr:10.202.3.66 Bcast:10.202.255.255 Mask:255.255.0.0
inet6 addr: fe80::221:9bff:fe9b:a756/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3452114 errors:0 dropped:0 overruns:0 frame:0
TX packets:1924908 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4900895721 (4.5 GiB) T...
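When virt-manager or virt-install reports peth0 as unavailable even though ifconfig shows the physical interface, the usual first check is whether xend's bridge script actually created the bridge and enslaved the renamed interface. A minimal sanity check, assuming the stock RHEL 5 Xen networking scripts (bridge and interface names depend on the local xend-config.sxp):
# grep network-script /etc/xen/xend-config.sxp
# brctl show
# service xend status
With the default network-bridge script, eth0 is renamed to peth0 and attached to the bridge, so brctl show should list peth0 as a port of xenbr0 (or whatever bridge name is configured locally); if it does not, restarting xend is the typical next step.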
2010 Apr 28 · 0 · 3.0.3-94.el5_4.[3/2] - no network options
...virt-python 0.6.3-20.1.el5_4
python-virtinst 0.400.3-5.el5
qemu 0.9.0-4
xen-libs 3.0.3-94.el5_4.3
xen-libs 3.0.3-94.el5_4.3
# ifconfig
eth0 Link encap:Ethernet HWaddr 00:21:9B:9B:A7:56
inet addr:10.202.3.66 Bcast:10.202.255.255 Mask:255.255.0.0
inet6 addr: fe80::221:9bff:fe9b:a756/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:44930 errors:0 dropped:0 overruns:0 frame:0
TX packets:32550 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:32144094 (30.6 MiB) TX byt...
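When virt-install offers no network options on a setup like this one, it is often because libvirt itself does not see any usable bridge or defined network. A quick check, assuming libvirt's standard NAT network is named "default" (the name and its availability vary per installation):
# virsh net-list --all
# virsh net-start default
# brctl show
net-start is only needed if the default network is defined but inactive; if no network is defined at all, an existing Xen bridge (e.g. xenbr0) can be passed to virt-install as a bridge device instead.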
2017 Jul 07 · 2 · I/O error for one folder within the mountpoint
...a52a-b3e2402d0316>
<gfid:01409b23-eff2-4bda-966e-ab6133784001>
<gfid:c723e484-63fc-4267-b3f0-4090194370a0>
<gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed>
<gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104>
<gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda>
<gfid:3f6438a0-2712-4a09-9bff-d5a3027362b4>
<gfid:392c8e2f-9da4-4af8-a387-bfdfea2f404e>
<gfid:37e1edfd-9f58-4da3-8abe-819670c70906>
<gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b>
<gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c>
<gfid:bb0352b9-4a5e-4075-9179-05c3a5766cf4>
<gfid:40133fcf-a1fb-4d60-b169...
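Lists like the one above are what gluster volume heal <volname> info prints when it can only report a gfid instead of a path. Each gfid corresponds to an entry under a brick's .glusterfs directory, so it can be mapped back to a real path on the brick. A sketch for one of the gfids above, assuming the brick is mounted at /data/brick (substitute the actual brick path from gluster volume info):
# ls -l /data/brick/.glusterfs/3f/64/3f6438a0-2712-4a09-9bff-d5a3027362b4
# find /data/brick -samefile /data/brick/.glusterfs/3f/64/3f6438a0-2712-4a09-9bff-d5a3027362b4
For regular files the .glusterfs entry is a hard link, so find -samefile turns up the original path; for directories it is a symlink whose target points at the parent's gfid entry plus the directory name.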
2017 Jul 07 · 0 · I/O error for one folder within the mountpoint
...<gfid:01409b23-eff2-4bda-966e-ab6133784001>
> <gfid:c723e484-63fc-4267-b3f0-4090194370a0>
> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed>
> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104>
> <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda>
> <gfid:3f6438a0-2712-4a09-9bff-d5a3027362b4>
> <gfid:392c8e2f-9da4-4af8-a387-bfdfea2f404e>
> <gfid:37e1edfd-9f58-4da3-8abe-819670c70906>
> <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b>
> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c>
> <gfid:bb0352b9-4a5e-4075-9179-05c3a5766cf4>
> <...
2017 Jul 07 · 2 · I/O error for one folder within the mountpoint
...4bda-966e-ab6133784001>
>> <gfid:c723e484-63fc-4267-b3f0-4090194370a0>
>> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed>
>> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104>
>> <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda>
>> <gfid:3f6438a0-2712-4a09-9bff-d5a3027362b4>
>> <gfid:392c8e2f-9da4-4af8-a387-bfdfea2f404e>
>> <gfid:37e1edfd-9f58-4da3-8abe-819670c70906>
>> <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b>
>> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c>
>> <gfid:bb0352b9-4a5e-4075-9179-05c3...
2017 Jul 07 · 0 · I/O error for one folder within the mountpoint
...01>
>>> <gfid:c723e484-63fc-4267-b3f0-4090194370a0>
>>> <gfid:fb1339a8-803f-4e29-b0dc-244e6c4427ed>
>>> <gfid:056f3bba-6324-4cd8-b08d-bdf0fca44104>
>>> <gfid:a8f6d7e5-0ff2-4747-89f3-87592597adda>
>>> <gfid:3f6438a0-2712-4a09-9bff-d5a3027362b4>
>>> <gfid:392c8e2f-9da4-4af8-a387-bfdfea2f404e>
>>> <gfid:37e1edfd-9f58-4da3-8abe-819670c70906>
>>> <gfid:15b7cdb3-aae8-4ca5-b28c-e87a3e599c9b>
>>> <gfid:1d087e51-fb40-4606-8bb5-58936fb11a4c>
>>> <gfid:bb0352b9...
2017 Jul 07 · 0 · I/O error for one folder within the mountpoint
On 07/07/2017 01:23 PM, Florian Leleu wrote:
>
> Hello everyone,
>
> this is my first time on the ML, so excuse me if I'm not following the
> rules well; I'll improve if I get comments.
>
> We have one volume "applicatif" on three nodes (2 data and 1 arbiter); each
> of the following commands was run on node ipvr8.xxx:
>
> # gluster volume info applicatif
>
> Volume
2017 Jul 07 · 2 · I/O error for one folder within the mountpoint
Hello everyone,
this is my first time on the ML, so excuse me if I'm not following the rules
well; I'll improve if I get comments.
We have one volume "applicatif" on three nodes (2 data and 1 arbiter); each
of the following commands was run on node ipvr8.xxx:
# gluster volume info applicatif
Volume Name: applicatif
Type: Replicate
Volume ID: ac222863-9210-4354-9636-2c822b332504
Status: Started
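For a replicate volume in this state, the usual companion to the gfid lists quoted earlier in the thread is the volume's heal status. A minimal check, using the volume name from the output above:
# gluster volume heal applicatif info
# gluster volume heal applicatif info split-brain
The first command lists entries still pending heal per brick (as paths or gfids); the second narrows this down to entries the cluster considers split-brain, which is a common reason for EIO being returned for one specific folder inside the mountpoint.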