Displaying 9 results from an estimated 9 matches for "a64f".
2012 May 23
2
Bug#674088: xcp-xapi: vbd-plug to dom0 does not creates /dev/xvd* devices in dom0
...; XCP) it is possible to attach a VDI to dom0.
That operation usually looks like:
xe vbd-create vdi-uuid=... vm-uuid=(dom0 uuid) device=N
xe vbd-plug
I did those steps in xcp-xapi and they succeeded (no error), but no xvd* device was found.
Here is the operations log:
# xe vbd-create vdi-uuid=c95af56f-799f-49ad-a64f-82eca3299b50 vm-uuid=d859ed1a-760f-9928-b5be-f0ab1790b15f type=Disk mode=RW device=2
b2e8f7b3-34c4-fe9a-486e-6551b8ba4165
# xe vbd-plug uuid=b2e8f7b3-34c4-fe9a-486e-6551b8ba4165
# ls /dev/xv*
ls: cannot access /dev/xv*: No such file or directory
Data from different logs:
SMlog:
[12866] 2012-05...
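A minimal sketch of the same sequence with an added check on the VBD state before looking for the device node; the VDI UUID and device number are the ones from the log above, and the remaining values are looked up rather than assumed:

# find dom0, then create and plug the VBD
DOM0=$(xe vm-list is-control-domain=true --minimal)
VBD=$(xe vbd-create vdi-uuid=c95af56f-799f-49ad-a64f-82eca3299b50 vm-uuid=$DOM0 type=Disk mode=RW device=2)
xe vbd-plug uuid=$VBD
# check whether the plug actually reached the attached state
xe vbd-param-get uuid=$VBD param-name=currently-attached
ls -l /dev/xvd*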
2018 Feb 27
2
Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
...t and run [self.domain.updateDeviceFlags(xml,
libvirt.VIR_DOMAIN_AFFECT_LIVE)] on CDROM (I want to change media path).
However, when I enable the libvirt debug log, the log is as below:
"2018-02-26 13:09:13.638+0000: 50524: debug : virDomainLookupByName:412 :
conn=0x7f7278000aa0, name=6ec499397d594ef2a64fcfc938f38225
2018-02-26 13:09:13.638+0000: 50515: debug : virDomainGetInfo:2431 :
dom=0x7f726c000c30, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), info=0x7f72b9059b20
2018-02-26 13:09:13.638+0000: 50515: debug : qemuGetProcessInfo:1479 : Got
status for 71...
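For comparison, virsh update-device drives the same virDomainUpdateDeviceFlags entry point as the Python call quoted above, which can help separate an XML problem from a binding problem. A minimal sketch; the target dev/bus and the ISO path are assumptions, only the domain name is taken from the log:

# cdrom.xml -- the <target> must match the CDROM device already defined in the domain
cat > cdrom.xml <<'EOF'
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/new-media.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
EOF
virsh update-device 6ec499397d594ef2a64fcfc938f38225 cdrom.xml --live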
2018 Aug 21
3
Samba 4.8.4 + BIND 9.9.4 - possibility of nonsecure DNS updates
...ecking 0 100 389 dc02x.samdom.svmetal.cz. against SRV _ldap._tcp.dc._msdcs.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389
Checking 0 100 389 dc03x.samdom.svmetal.cz. against SRV _ldap._tcp.dc._msdcs.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389
Looking for DNS entry SRV _ldap._tcp.991e4476-399a-4712-a64f-a2019ed40e7b.domains._msdcs.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389 as _ldap._tcp.991e4476-399a-4712-a64f-a2019ed40e7b.domains._msdcs.samdom.svmetal.cz.
Checking 0 100 389 dc01.samdom.svmetal.cz. against SRV _ldap._tcp.991e4476-399a-4712-a64f-a2019ed40e7b.domains._msdcs.samdom.svmetal.cz dc03...
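Whether unsigned updates are accepted at all is governed on the Samba side by the "allow dns updates" smb.conf option (default "secure only"). A sketch for checking the effective setting; the poster's actual configuration is not shown in the excerpt:

# print the effective value, including defaults
testparm -sv 2>/dev/null | grep 'allow dns updates'
# to forbid unsigned updates explicitly, set in the [global] section of smb.conf:
#   allow dns updates = secure only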
2018 Aug 21
0
Samba 4.8.4 + BIND 9.9.4 - possibility of nonsecure DNS updates
...m.svmetal.cz. against SRV
> _ldap._tcp.dc._msdcs.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389
> Checking 0 100 389 dc03x.samdom.svmetal.cz. against SRV
> _ldap._tcp.dc._msdcs.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389
> Looking for DNS entry SRV _ldap._tcp.991e4476-399a-4712-a64f-a2019ed40e7b.domains._msdcs.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389 as _ldap._tcp.991e4476-399a-4712-a64f-a2019ed40e7b.domains._msdcs.samdom.svmetal.cz.
> Checking 0 100 389 dc01.samdom.svmetal.cz. against SRV _ldap._tcp.991e4476-399a-4712-a64f-a2019ed40e7b.domains._msdcs.sam...
2018 Aug 22
1
Samba 4.8.4 + BIND 9.9.4 - possibility of nonsecure DNS updates
...cs.samdom.svmetal.cz dc03x.samdom.svmetal.cz
force update: A samdom.svmetal.cz 192.168.45.1
force update: SRV _ldap._tcp.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389
force update: SRV _ldap._tcp.dc._msdcs.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389
force update: SRV _ldap._tcp.991e4476-399a-4712-a64f-a2019ed40e7b.domains._msdcs.samdom.svmetal.cz dc03x.samdom.svmetal.cz 389
force update: SRV _kerberos._tcp.samdom.svmetal.cz dc03x.samdom.svmetal.cz 88
force update: SRV _kerberos._udp.samdom.svmetal.cz dc03x.samdom.svmetal.cz 88
force update: SRV _kerberos._tcp.dc._msdcs.samdom.svmetal.cz dc03x.sa...
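One way to test from a client whether the DC really refuses unsigned updates is to try the same record both without and with GSS-TSIG. A sketch; the test record, IP, and Kerberos principal are assumptions, while the server and zone names come from the log above:

# unsigned update -- should be REFUSED when only secure updates are allowed
nsupdate <<'EOF'
server dc03x.samdom.svmetal.cz
zone samdom.svmetal.cz
update add testhost.samdom.svmetal.cz 300 A 192.168.45.99
send
EOF

# signed (GSS-TSIG) update with a valid ticket -- should be accepted
kinit administrator@SAMDOM.SVMETAL.CZ
nsupdate -g <<'EOF'
server dc03x.samdom.svmetal.cz
zone samdom.svmetal.cz
update add testhost.samdom.svmetal.cz 300 A 192.168.45.99
send
EOF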
2018 Feb 27
1
Reply: Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
...the local libvirt master branch following your patch, and built an RPM
for CentOS 7.4. The virDomainUpdateDeviceFlags log is as below:
================================================
2018-02-27 09:27:43.782+0000: 16656: debug : virDomainUpdateDeviceFlags:8326 :
dom=0x7f2084000c50, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), xml=<disk device="cdrom"
type="network"><source name="zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f"
protocol="rbd"><host name="10.0.229.181" port="...
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please? (A sketch of collecting it follows this message.)
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
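A sketch of collecting the three items above; the volume name is inferred from the "0-home-replicate-0" prefix in the heal logs later in this thread, and the brick path is a placeholder:

VOLNAME=home
BRICK=/data/brick1/home    # placeholder: use the actual brick path from `gluster volume info`
# 1. volume configuration
gluster volume info $VOLNAME
# 2. replication metadata for the affected file, run on every brick node
getfattr -d -e hex -m . $BRICK/path/to/affected/file
# 3. self-heal daemon and glfsheal logs
ls -l /var/log/glusterfs/glustershd.log /var/log/glusterfs/glfsheal-$VOLNAME.log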
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
...ommon.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 2cea210a-cd85-4ca8-a01e-5b026cbb8d98. sources=0 [2] sinks=1
[2017-10-25 10:40:17.786451] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 5cd1064d-a6c6-4e82-a64f-fef92542b537. sources=0 [2] sinks=1
[2017-10-25 10:40:17.796601] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on c30af9e9-72b0-4b30-8ecd-b589e39453a3. sources=0 [2] sinks=1
[2017-10-25 10:40:17.806350] I [MSGID: 108026] [afr-self-h...
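The heal entries above identify files only by gfid; on a brick the gfid can be mapped back to a real path through the hardlink kept under .glusterfs. A sketch with a placeholder brick path, using one of the gfids from the log:

BRICK=/data/brick1/home    # placeholder brick path
GFID=5cd1064d-a6c6-4e82-a64f-fef92542b537
# regular files are hardlinked under .glusterfs/<first-2-hex>/<next-2-hex>/<gfid>
ls -l "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
# locate the corresponding path inside the brick
find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -print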