search for: qeum

Displaying 14 results from an estimated 14 matches for "qeum".

Did you mean: qemu
2019 Oct 12
0
qeum on centos 8 with nvme disk
I have CentOS 8 installed solely on one nvme drive and it works fine and relatively quickly.
/dev/nvme0n1p4  218G   50G  168G  23% /
/dev/nvme0n1p2  2.0G  235M  1.6G  13% /boot
/dev/nvme0n1p1  200M  6.8M  194M   4% /boot/efi
You might want to partition the device (p3 is swap). Alan On 13/10/2019 10:38, Jerry Geis wrote: > Hi All - I use qemu on my centOS 7.7 box that
2019 Oct 12
1
qeum on centos 8 with nvme disk
Hi Alan, yes, I have partitioned similarly, with a swap, but as I mentioned it is slow! What command line do you use?
Device         Boot      Start       End   Blocks  Id System
/dev/nvme0n1p1             2048 102402047 51200000  83 Linux
/dev/nvme0n1p2        102402048 110594047  4096000  82 Linux swap / Solaris
/dev/nvme0n1p3        110594048 112642047  1024000   6 FAT16
2019 Oct 12
0
qeum on centos 8 with nvme disk
> How do you measure the slowness? Use fio or bonnie++ to share some numbers. By it taking more than 6 hours to "install" CentOS 8 in the guest :) Jerry
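For reference, a fio run that would produce the kind of numbers the reply asks for might look like this (the file name, size, and block size here are illustrative, not from the thread):

```shell
# Hypothetical sequential-write benchmark with direct I/O, bypassing
# the page cache so the reported bandwidth reflects the nvme device.
# Point --filename at a path on the filesystem being measured.
fio --name=seqwrite --filename=/tmp/fio.test --size=1G \
    --rw=write --bs=1M --direct=1 --numjobs=1 --group_reporting
rm -f /tmp/fio.test
```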
2019 Oct 14
0
qeum on centos 8 with nvme disk
> so you can try like: virt-install -n NAME -r mem --vcpus=N --accelerate >--os-type=X --os-variant=X --disk path=/dev/nvme0n1[pN] ...and so on. Is there a command for virt-manager stuff that is just like qemu? Just the command line - I don't want the GUI popping up and all that stuff. I don't need it creating all the other files - just a simple command line? I have not found that yet with my
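Filling in the placeholders from the reply, a concrete invocation might look like the sketch below. The guest name, memory and vcpu counts, partition, and install-tree URL are all assumptions for illustration, not values from the thread:

```shell
# Sketch of a CLI-only guest install onto a raw nvme partition.
# cache=none,io=native avoids double-caching the raw block device.
virt-install \
    --name c8-test \
    --memory 4096 \
    --vcpus 2 \
    --os-variant centos8 \
    --disk path=/dev/nvme0n1p1,cache=none,io=native \
    --location http://mirror.centos.org/centos/8/BaseOS/x86_64/os/ \
    --graphics none \
    --extra-args 'console=ttyS0'
```

`--graphics none` plus the `console=ttyS0` kernel argument keeps the whole install on a text serial console, with no GUI involved.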
2019 Oct 14
1
qeum on centos 8 with nvme disk
>virt-install can be run with no GUI. You can set it up to >automatically start a serial console in case you need to interact with >the install. You can also use 'virsh' to edit VM configs from the >command line. Sure - I saw those - but I was looking for something just like the old qemu command line. Just boot up and run - Nothing added to a GUI interface. Nothing that I
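The no-GUI workflow the reply describes could be sketched like this (the guest name is a placeholder carried over for illustration):

```shell
# Hypothetical day-to-day management, all from the command line:
virsh start c8-test      # boot the guest
virsh console c8-test    # attach to its serial console (Ctrl+] detaches)
virsh edit c8-test       # edit the guest XML config in $EDITOR
virsh destroy c8-test    # hard power-off
```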
2019 Oct 13
5
qeum on centos 8 with nvme disk
>6 hours is too much. First of all you need to check your nvme >performance (dd can help: dd if=/dev/zero of=/test bs=1M count=10000 and >see the results. If you want more benchmark-oriented results you could try >bonnie++ as suggested by Jerry). >Other than this, have you got the kvm module loaded and the cpu >virtualization option enabled in the BIOS? >If yes, have you got created the VM
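The checks suggested in that reply can be run as follows (a smaller dd count is shown here for illustration; the output file path is a placeholder):

```shell
# Rough sequential-write test; conv=fdatasync forces the data to disk
# before dd reports its rate, so the number is not just page-cache speed.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 conv=fdatasync
rm -f /tmp/ddtest

# Is hardware virtualization exposed, and is the kvm module loaded?
egrep -c '(vmx|svm)' /proc/cpuinfo || echo "no vmx/svm flag in cpuinfo"
lsmod | grep kvm || echo "kvm module not loaded"
```

A non-zero flag count and a loaded kvm_intel or kvm_amd module are what separate hardware-accelerated guests from pure emulation.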
2019 Oct 12
7
qeum on centos 8 with nvme disk
Hi All - I use qemu on my CentOS 7.7 box that has software raid of 2 SSD disks. I installed an nVME drive in the computer also. I tried to install CentOS 8 on it (the physical /dev/nvme0n1) with -hda /dev/nvme0n1 as the disk. The process started installing but is really "slow" - I was expecting with the nvme device it would be much quicker. Is there something I am missing how to
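A plain `-hda` invocation like the one described runs without KVM acceleration unless it is requested explicitly, which on its own could account for a multi-hour install. A direct qemu command with acceleration enabled might look like this sketch (memory, cpu count, and the ISO file name are placeholders, not from the thread):

```shell
# Hypothetical direct qemu invocation: -enable-kvm requests hardware
# acceleration (without it qemu falls back to pure emulation), and
# cache=none avoids double-caching the raw block device.
qemu-system-x86_64 \
    -enable-kvm \
    -m 4096 -smp 2 \
    -drive file=/dev/nvme0n1,format=raw,cache=none,if=virtio \
    -cdrom centos8-install.iso \
    -boot d
```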
2009 Aug 21
2
how to clear <defunct> qemu-system-x86 processes
Hi all, I have a long-running program which calls libguestfs to check VM images, but I found that each time after I call guestfs_close(), the qeum-system-x86 process which is forked by libguestfs becomes a <defunct> process. As the docs of guestfs_close say "This closes the connection handle and frees up all resources used.", is this a bug, or is there some other way to clear them? Thanks a lot!
2013 Feb 24
1
xen hvm fails to start
When I try to start hvm qemu-xen I get this error. It seems that the path may be off. Can someone point me to where in the source code the path to the qemu pid is defined? It seems that qeum-xen points to a location, and I don't see where it is defined. xc: detail: elf_load_binary: phdr 0 at 0x0x7f71cac72000 -> 0x0x7f71cad079d5 libxl: error: libxl_dom.c:612:libxl__build_hvm: hvm building failed libxl: error: libxl_create.c:885:domcreate_rebuild_done: cannot (re-)build domain: -3 l...
2013 Apr 17
1
question about process power which has MCSx
...; error_policy='stop'), make some modifications to its files and save them. Then go to the hypervisor and modify the MCS of the guestVM's image file. 1. I can still read those files (cache=none)? It should not be so. Why? 2. Then I modify files and save; the guestVM hangs, paused in the UI. This part is right: the qeum process can not write any more. But why is the guestVM hung, and why can it not be resumed? 3. Looking at the audit info: denied { write } for pid=52162 comm="qemu-kvm". That pid is 52162; is that not my qemu-kvm's pid? Why? Thanks so much.
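The symptoms described are what sVirt MCS isolation produces when a process label and an image label stop matching. A hedged sketch of how one might inspect that (the image directory is a common default, assumed here for illustration):

```shell
# Illustrative sVirt/MCS inspection on the hypervisor:
ps -eZ | grep qemu-kvm            # process labels, e.g. ...svirt_t:s0:cNNN,cNNN
ls -Z /var/lib/libvirt/images/    # image labels; categories must match the process
ausearch -m avc -ts recent        # recent AVC denials, including the denied pid
```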
2013 Jun 20
0
Installation problem
Hi, I have installed glusterfs on my ubuntu 13.04 and now I want to configure qeume with --enable-glusterfs, but it's not working. Here is the error: ERROR ERROR: User requested feature GlusterFS backend support ERROR: configure was not able to find it ERROR Would you please help with which gluster package needs to be installed so qemu can find glusterfs. Br. Umar
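qemu's GlusterFS backend is built against the libgfapi client library, and configure fails when its headers are missing. A sketch of the fix, with the caveat that the exact Ubuntu package names below are assumptions and should be verified against the installed gluster version:

```shell
# Hypothetical: install the GlusterFS API library and headers that
# qemu's configure probes for, then re-run configure.
sudo apt-get install glusterfs-common libglusterfs-dev
./configure --enable-glusterfs
```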
2014 May 01
0
Some odd quirks of libvirt and xen on fedora 20.
for pv systems the vnc console display is on the correct port with the following call to qeum-dm
libxl: debug: libxl_dm.c:1213:libxl__spawn_local_dm:   -domain-name
libxl: debug: libxl_dm.c:1213:libxl__spawn_local_dm:   paravirt
libxl: debug: libxl_dm.c:1213:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1213:libxl__spawn_local_dm:   127.0.0.1:1
but for hv systems xl is being pa...
2012 Jan 27
3
Cannot remove lvs associated with deleted vm guests
At the beginning of January I encountered a problem where several vm guests on a single host somehow managed to see the virtual disks assigned to other guests on the same host. I was unable to resolve this situation and shut down the affected guests after creating new guest instances and moving the services and data off the corrupted guests. I have since removed these guests via virt-manager
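When lvremove refuses to delete a volume that belonged to a deleted guest, a common cause is a stale device-mapper mapping still holding it open. A sketch of the cleanup, with placeholder VG/LV names (not from the thread):

```shell
# Illustrative cleanup when lvremove reports the LV is in use:
lvs                                   # list the leftover logical volumes
lvremove /dev/vg_guests/lv_oldguest   # fails with "in use" if still held
dmsetup info -c | grep lv_oldguest    # look for a lingering dm mapping
dmsetup remove vg_guests-lv_oldguest  # drop the mapping, then retry lvremove
```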