Displaying 5 results from an estimated 5 matches for "9afe".
2008 Feb 12 · 4 replies · xVM and VirtualBox
...744
VirtualBox starts up but cannot start a virtual machine because the
vboxdrv driver does not work:

    VirtualBox kernel driver not installed.
    VBox status code: -1908 (VERR_VM_DRIVER_NOT_INSTALLED).
    Result Code: 0x80004005
    Component: Console
    Interface: IConsole {d5a1cbda-f5d7-4824-9afe-d640c94c7dcf}
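For context, on an ordinary Linux host this error usually just means the
vboxdrv module was never built or loaded; a minimal sketch of the usual
check-and-rebuild steps from that era (the init script path is an
assumption that varies by distribution, and none of this helps under an
xVM Dom0, which is the poster's point):

    # Check whether the vboxdrv kernel module is currently loaded
    lsmod | grep vboxdrv

    # Rebuild the module against the running kernel and load it
    # (init script shipped with VirtualBox host packages of that era)
    sudo /etc/init.d/vboxdrv setup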
I really hope that, now that Innotek has been bought by Sun, it will be
possible in a future version to use VirtualBox in an xVM Dom0 as well.
Bernd
--
Bernd Schemmer, Frankfurt am Main, Germany
http://home.arcor.de/bnsmb/index.html
Sooner rather than later, the world will change ....
2015 Nov 24 · 2 replies · libvirtd doesn't attach Sheepdog storage VDI disk correctly
...
update sheepdog client] update sheepdog client path (Vasiliy Tolstov),
...
Nevertheless it doesn't work in Debian 8 testing (libvirt 1.2.21).
--
Best regards
Adolf Augustin
E-mail: adolf.augustin@zettamail.de
PGP-Key: 0xC4709AFE
Fingerprint: 1806 35FA CAE8 0202 B7AF 12B9 5956 5BC0 C470 9AFE
2015 Nov 30 · 1 reply · Re: libvirtd doesn't attach Sheepdog storage VDI disk correctly
...
>> Nevertheless it doesn't work in Debian 8 testing (libvirt 1.2.21).
>
> This commit is not related to your issue. Also, how do you define the
> volume?
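For reference, a Sheepdog-backed disk is normally given to libvirt as a
network disk; a minimal sketch of one way to define and attach it,
assuming a hypothetical VDI name, a hypothetical guest name, and a sheep
daemon on localhost:7000:

    # Hypothetical example: describe the Sheepdog VDI as a network disk...
    cat > sheepdog-disk.xml <<'EOF'
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='sheepdog' name='debian8-vdi'>
        <host name='localhost' port='7000'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>
    EOF

    # ...then attach it to the guest, persisting across restarts
    virsh attach-device debian8-guest sheepdog-disk.xml --persistent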
--
Best regards
Adolf Augustin
E-mail: adolf.augustin@zettamail.de
PGP-Key: 0xC4709AFE
Fingerprint: 1806 35FA CAE8 0202 B7AF 12B9 5956 5BC0 C470 9AFE
2015 Dec 09 · 0 replies · virt-manager 1.3.1 - broken ?? (Ubuntu 14.04)
... Namespace Vte not available for version 2.91
Any suggestions?
virt-manager versions 1.1.x-1.3.0 worked just fine.
What happened?
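For context, that message comes from GObject introspection: virt-manager
1.3.x requests the Vte 2.91 API, and Ubuntu 14.04 packages only the older
2.90 typelib, which is the likely breakage here; a hedged sketch of how
one would check, and install the typelib on releases that do ship it:

    # List the installed GObject introspection typelibs for Vte
    ls /usr/lib/*/girepository-1.0/ | grep -i vte

    # Install the 2.91 typelib where packaged (not in the 14.04 archive,
    # so on trusty the practical fix is a backport or an older virt-manager)
    sudo apt-get install gir1.2-vte-2.91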
--
Best regards
Adolf Augustin
E-mail: adolf.augustin@zettamail.de
PGP-Key: 0xC4709AFE
Fingerprint: 1806 35FA CAE8 0202 B7AF 12B9 5956 5BC0 C470 9AFE
2017 Jul 25 · 0 replies · recovering from a replace-brick gone wrong
...1077:afr_log_selfheal] 2-gv_cdn_001-replicate-1: Completed metadata selfheal on 97ad48bc-8873-4700-9f82-47130cd031a1. sources=[0] sinks=
[2017-07-23 20:38:04.837770] I [MSGID: 108026] [afr-self-heal-common.c:1077:afr_log_selfheal] 2-gv_cdn_001-replicate-1: Completed metadata selfheal on 041e58e2-9afe-4f43-ba0b-d11e80b5053b. sources=[0] sinks=
To "fix" the situation I had to kill off the new brick process, so now
I have a distributed-replicate volume with one of the replica sets in a
degraded state.
Is there any way I can re-add the old brick back to get b...
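For reference, glusterd refuses a brick that still carries its old volume
identity, so re-adding one usually starts by clearing that metadata; a
rough sketch under assumed, hypothetical brick paths and hostnames
(destructive, and only one of several recovery approaches):

    # Hypothetical brick path on the old server
    BRICK=/data/gv_cdn_001/brick1

    # Strip the old volume identity so glusterd will accept the brick again
    setfattr -x trusted.glusterfs.volume-id "$BRICK"
    setfattr -x trusted.gfid "$BRICK"
    rm -rf "$BRICK/.glusterfs"

    # Swap the failed replacement brick back out for the old one, then heal
    gluster volume replace-brick gv_cdn_001 \
        newhost:/data/brick1 oldhost:"$BRICK" commit force
    gluster volume heal gv_cdn_001 full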