Hi Team

I'm facing the following error while running libvirt on a ppc platform with
CPU model e5500:

2015-07-15 06:30:37.307+0000: 3976: warning : virQEMUCapsInit:1001 : Failed to get host CPU
2015-07-15 06:30:37.642+0000: 3976: error : virFirewallApply:936 : out of memory
2015-07-15 06:31:16.451+0000: 3969: error : cpuNodeData:344 : this function is not supported by the connection driver: cannot get node CPU data for ppc architecture

Below is the output of the relevant files:

cat /proc/cpuinfo
processor   : 0
cpu         : e5500
clock       : 1400.000000MHz
revision    : 2.1 (pvr 8024 1021)
bogomips    : 75.00

cat /usr/share/libvirt/cpu_map.xml

<model name='POWERPC_e5500'>
  <vendor name='Freescale'/>
  <pvr value='0x80240000'/>
</model>

<model name='POWERPC_e6500'>
  <vendor name='Freescale'/>
  <pvr value='0x80400000'/>
</model>

</arch>
</cpus>

Thanks
Abhishek Jain
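For reference: the "pvr 8024 1021" in /proc/cpuinfo is the 32-bit Processor
Version Register printed as two 16-bit halves, the version (0x8024) and the
revision (0x1021), while the <pvr value='0x80240000'/> entry in cpu_map.xml
keeps only the version half. A minimal sketch of the relationship, assuming
the revision bits are masked off when libvirt matches models (the exact mask
is internal to libvirt's CPU driver and not verified here):

printf '0x%04x%04x\n' 0x8024 0x1021
0x80241021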
Andrea Bolognani
2015-Jul-22 07:53 UTC
Re: [libvirt-users] libvirtd error missing cpu model
On Tue, 2015-07-21 at 15:12 +0530, abhishek jain wrote:
> I'm facing the following error while running libvirt on a ppc platform
> with CPU model e5500:
> [...]
> 2015-07-15 06:31:16.451+0000: 3969: error : cpuNodeData:344 : this
> function is not supported by the connection driver: cannot get node
> CPU data for ppc architecture
> [...]

Hi,

it would be nice if you could provide more information, like

* domain XML
* output of 'uname -a'
* libvirt version

AFAIK the e5500 CPU is ppc64, not ppc, and there are no ppc CPUs
defined in cpu_map.xml.

Cheers.

--
Andrea Bolognani
Software Engineer - Virtualization Team
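A quick way to confirm which architecture a given model is filed under in
cpu_map.xml, assuming xmllint (from libxml2) is available on the host:

xmllint --xpath "//arch[model/@name='POWERPC_e5500']/@name" \
    /usr/share/libvirt/cpu_map.xml
 name="ppc64"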
Hi Andrea

Thanks for the reply. Below is some more information:

uname -a
Linux t1040rdb 3.12.37-rt51-QorIQ-SDK-V1.8+gf488de6 #2 SMP Mon Jul 20 14:43:02 IST 2015 ppc GNU/Linux

libvirtd --version
libvirtd (libvirt) 1.2.13

cat /usr/share/libvirt/cpu_map.xml

</arch>

<arch name='ppc64'>
  <model name='POWERPC_e5500'>
    <vendor name='Freescale'/>
    <pvr value='0x80240000'/>
  </model>

  <model name='POWERPC_e6500'>
    <vendor name='Freescale'/>
    <pvr value='0x80400000'/>
  </model>
</arch>
</cpus>

On Wed, Jul 22, 2015 at 1:23 PM, Andrea Bolognani <abologna@redhat.com> wrote:
> it would be nice if you could provide more information, like
>
> * domain XML
> * output of 'uname -a'
> * libvirt version
>
> AFAIK the e5500 CPU is ppc64, not ppc, and there are no
> ppc CPUs defined in cpu_map.xml.
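Since uname reports a 32-bit ppc kernel, libvirt presumably classifies the
host as ppc, an architecture for which cpu_map.xml defines no models; that
would line up with the cpuNodeData error above. One way to check which
architecture libvirt itself reports, again assuming xmllint is available
(the output shown is the expected value, not captured from this host):

virsh capabilities | xmllint --xpath '//host/cpu/arch/text()' -
ppc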