search for: timer_mod

Displaying 20 results from an estimated 94 matches for "timer_mod".

2008 May 14
3
Bug#481200: xend: Handle unspecified timer_mode domain platform parameter.
...2.0.old/tools/python/xen/xend/XendDomainInfo.py	2008-01-18 15:31:10.000000000 -0200
+++ xen-3-3.2.0/tools/python/xen/xend/XendDomainInfo.py	2008-05-14 10:22:15.000000000 -0300
@@ -1650,9 +1650,10 @@
         self._recreateDom()
         # Set timer configration of domain
-        if hvm:
+        timer_mode = self.info["platform"].get("timer_mode")
+        if hvm and timer_mode is not None:
             xc.hvm_set_param(self.domid, HVM_PARAM_TIMER_MODE,
-                long(self.info["platform"].get("timer_mode")))
+                long(timer_mod...
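The unpatched code called long() on a platform value that could be None. A minimal standalone Python sketch of the same guard pattern (set_param here is a hypothetical stand-in for the real xc.hvm_set_param binding):

```python
# Sketch of the Bug#481200 pattern: only program the HVM timer mode
# when the domain config actually specifies one.

def set_timer_mode(platform, set_param):
    timer_mode = platform.get("timer_mode")
    if timer_mode is None:
        return False  # leave the hypervisor's default in place
    set_param("HVM_PARAM_TIMER_MODE", int(timer_mode))
    return True

calls = []
applied = set_timer_mode({"timer_mode": "2"}, lambda k, v: calls.append((k, v)))
skipped = set_timer_mode({}, lambda k, v: calls.append((k, v)))  # no crash on missing key
```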
2008 Jun 03
1
change hvm defaults for timer_mode and hpet?
Due to recent changes in timer handling (specifically building hpet emulation on top of Xen system time and ensuring it is monotonic), I wonder if it now makes sense to: 1) change the hvm default for hpet to 1 (was 0); 2) change the hvm timer_mode default from 0 to 2. I encouraged adding the hvm hpet parameter and defaulting it to 0 because the virtual hpet was not reliable, and many guests/versions default to using hpet if it is available. That reliability problem should now be fixed. Timer_mode==0 is necessary for guests t...
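For context, the two knobs under discussion map to these HVM guest config settings. The values below are the proposal from this message (not what any given release ships), and the mode name follows later xl.cfg terminology, so treat it as an assumption for a 2008-era tree:

```
# HVM guest config fragment: proposed new defaults from the thread
hpet = 1        # expose the virtual HPET to the guest (old default: 0)
timer_mode = 2  # no_missed_ticks_pending (old default: 0, delay_for_missed_ticks)
```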
2012 Feb 17
3
Re: Xen domU Timekeeping (a.k.a TSC/HPET issues)
...I can't find any further information about this >> issue. What is the state of this issue? The inconsistency I see right >> now is this: in the July 2010 TSC discussion, a "Stefano Stabellini" >> posted this: >> >> ==== >> > /me wonders if timer_mode=1 is the default for xl? >> > Or only for xm? >> >> no, it is not. >> Xl defaults to 0 [zero], I am going to change it right now. >> ==== >> >> So, it seems like (at least as of July 2010), xl is defaulting to >> "timer_mode=1". That is...
2015 Nov 21
1
[PATCH -qemu] nvme: support Google vendor extension
...But, I have a possible culprit. In your nvme_cq_notifier you are not doing the equivalent of:

    start_sqs = nvme_cq_full(cq) ? 1 : 0;
    cq->head = new_head;
    if (start_sqs) {
        NvmeSQueue *sq;
        QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
            timer_mod(sq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
        }
        timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
    }

Instead, you are just calling nvme_post_cqes, which is the equivalent of timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRT...
2008 Dec 29
1
Guest time and TSCs since changeset 17716
...m_get_guest_time and hvm_set_guest_time were changed to use this. Previously, the guest time was stored directly in the TSC offset fields of the vmx/svm control structures. Since pt_freeze_time and pt_thaw_time use hvm_get/set_guest_time, they now no longer freeze TSC time for a guest. So, for timer_mode 0, TSC time is no longer frozen when a VCPU is not running. Unless you're using opt_softtsc, in which case TSC exactly tracks the per-domain values. I have no love for timer_mode 0 (it has serious issues on SMP), but was this change intentional? Or am I perhaps missing something? - Frank...
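A toy model of the freeze/thaw behaviour being described: under delay_for_missed_ticks (timer_mode 0), a descheduled VCPU's guest time is latched and its TSC offset is recomputed on thaw so guest time resumes where it stopped. Names and arithmetic here are illustrative, not Xen's actual data structures:

```python
# Toy model of timer_mode 0 time freezing: guest time must not
# advance while the VCPU is descheduled.

class Vcpu:
    def __init__(self):
        self.tsc_offset = 0      # guest_time = host_tsc + tsc_offset
        self.frozen_time = None  # guest time latched at freeze, or None

    def guest_time(self, host_tsc):
        if self.frozen_time is not None:
            return self.frozen_time
        return host_tsc + self.tsc_offset

    def freeze(self, host_tsc):
        self.frozen_time = self.guest_time(host_tsc)

    def thaw(self, host_tsc):
        # Recompute the offset so guest time resumes where it stopped.
        self.tsc_offset = self.frozen_time - host_tsc
        self.frozen_time = None

v = Vcpu()
v.freeze(host_tsc=100)                        # guest time latched at 100
t_while_frozen = v.guest_time(host_tsc=250)   # still 100, not 250
v.thaw(host_tsc=250)
t_after = v.guest_time(host_tsc=260)          # 110: only 10 ticks elapsed
```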
2007 Dec 19
23
3.1.x and 3.2.x releases
Folks, A new release candidate for 3.2.0 has just been checked into the xen-unstable tree. It's available from staging and will be in the main tree when it has passed internal regression tests. Meanwhile, in preparation for 3.1.3, please let me know if there are any further patches from xen-unstable that should be backported into the 3.1 branch. You can pull the xen-3.1-testing.hg
2012 Feb 07
7
GPLPV, RDP and network latency
...problem. We have tried different Windows 7 distros, 32- and 64-bit editions, Windows and Linux RDP clients, with the same result. Software used: Xen version 4.1.2_05-1.1.1 (abuild@) (gcc version 4.6.2 (SUSE Linux) ) Sun Oct 30 03:25:04 UTC 2011; gplpv_Vista2008x32_signed_0.11.0.308; Windows 7 SP1. timer_mode is set to 1, offload settings are:

Offload parameters for vif2.0:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: of...
2009 May 18
2
W2K3 HVM with gplpv shows strange pings
I just migrated a W2K3 Server to a Xen HVM domain, using the gplpv 0.10.69 drivers. Now ping shows strange results:

Pinging 127.0.0.1 with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time=23538ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time=-23538ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128

Same result with external
2013 Jan 18
6
[PATCH v1 01/02] HVM firmware passthrough libxl support
This patch introduces support for two new parameters in libxl: smbios_firmware=<path_to_smbios_structures_file> acpi_firmware=<path_to_acpi_tables_file> The changes are primarily in the domain building code where the firmware files are read and passed to libxc for loading into the new guest. After the domain building call to libxc, the addresses for the loaded blobs are returned and
2013 Jan 17
4
[PATCH v4] tools/libxl: Improve videoram setting
2010 May 28
4
Anyone able to NFS boot on xen 4.x ?
Has anyone been able to get domU NFS boots working with any version of Xen 4.x? If so, can you please post your config? Both the dom0 Xen version & kernel, as well as the domU config file. I've spent a lot of time running through all the docs, HOWTOs and published configs and assorted patches for Ubuntu 9x-10x and Xen 4x trying NFS booting with each. I'm not going
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>>     }
>>>
>>>     start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> -   cq->head = new_head;
2010 Jun 08
32
Problems with GPLPV network latency
Hi, DomU is a Win2008 R2 64. When I install the GPLPV drivers, the network latency goes from 15ms to random numbers up to 1200ms and eventually dies. If you run a ping from the DomU to another host, the network stays alive but the high latency is still there. Furthermore, if I try to uninstall the network driver I am unable to use the old one (realtek), as it cannot detect the device.
2008 Aug 22
3
Problem with Broadcom Corporation NetXtreme II BCM5708 bnx2
Hi, I have a Dell PowerEdge 1950 with two Broadcom NetXtreme II BCM5708 1000Base-T NICs. I installed CentOS 5.1 and Xen 3.0.3 (RPM). One of my virtual machines has Windows 2003 Server. In this virtual machine my NICs appear as "Realtek RTL8139 Family PCI Fast Ethernet NIC". The problem is that when I ping other machines, sometimes the reply time value is very high: C:>
2013 Sep 18
1
[PATCH] Allow 4 MB of video RAM for Cirrus graphics on traditional QEMU
...b_info->video_memkb = 8 * 1024;
-    else if (b_info->video_memkb < (8 * 1024) ){
-        LOG(ERROR,"videoram must be at least 8 mb");
-        return ERROR_INVAL;
+            break;
+        }
+        break;
     }

     if (b_info->u.hvm.timer_mode == LIBXL_TIMER_MODE_DEFAULT)
@@ -251,8 +286,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         if (!b_info->u.hvm.boot)
             return ERROR_NOMEM;
     }
-    if (!b_info->u.hvm.vga.kind)
-        b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;...
2013 Feb 05
21
[PATCH] x86/hvm: fix corrupt ACPI PM-Timer during live migration
The value of ACPI PM-Timer may be broken on save unless the timer mode is delay_for_missed_ticks. With other timer modes, vcpu->arch.hvm_vcpu.guest_time is always zero and the adjustment from its value is wrong. This patch fixes the saved value of ACPI PM-Timer: - don't adjust the PM-Timer if vcpu->arch.hvm_vcpu.guest_time is zero. - consolidate calculations of PM-Timer to one
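A minimal sketch of the guard the patch describes: skip the adjustment entirely when guest_time is zero, since timer modes other than delay_for_missed_ticks never maintain it. The real fix lives in Xen's PM-Timer save path; function names and arithmetic here are illustrative:

```python
# Sketch: only adjust the saved PM-Timer count when guest_time is
# meaningful; a zero guest_time means the timer mode never tracks it,
# and adjusting by it would corrupt the saved count.

PMT_FREQ = 3579545  # ACPI PM-Timer frequency in Hz

def saved_pmt(raw_count, guest_time_ns, now_ns):
    if guest_time_ns == 0:
        return raw_count  # don't adjust: guest_time is not maintained
    delta_ns = now_ns - guest_time_ns
    return raw_count + (delta_ns * PMT_FREQ) // 10**9

unadjusted = saved_pmt(1000, 0, 5 * 10**9)            # guard taken
adjusted = saved_pmt(1000, 4 * 10**9, 5 * 10**9)      # 1s of ticks added
```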
2011 Dec 16
13
[PATCH 0 of 4] Support for VM generation ID save/restore and migrate
This patch series adds support for preservation of the VM generation ID buffer address in xenstore across save/restore and migrate, and also code to increment the value in all cases except for migration. Patch 1 modifies the guest ro and rw node creation to an open coding style and cleans up some extraneous node creation. Patch 2 modifies creation of the hvmloader key in xenstore and adds
2010 Jul 07
4
Windows 7 x64 + GPLPV: STOP 0x00000101
When trying to use the GPL PV drivers with Windows 7 x64 (and testsigning turned on), I get a BSOD with a STOP 0x00000101 error. There's also an error at the top that says "A clock interrupt was not received on a secondary processor within the allocated time interval." Is this STOP error related to the HPET or other timing adjustments for Xen? Any advice on things I should
2008 Sep 29
3
LVM related bug in the GPL PV drivers for Windows?
...nfig file of the domU: name="vw-storman2" uuid="7b77d277-e99a-e25b-3652-61e936f78abb" memory=1024 vcpus=1 on_poweroff="destroy" on_reboot="restart" on_crash="destroy" on_xend_start = "ignore" on_xend_stop = "shutdown" localtime=1 timer_mode=1 builder="hvm" device_model="/usr/lib/xen/bin/qemu-dm" kernel="/usr/lib/xen/boot/hvmloader" boot="c" disk=[ ''phy:/dev/mapper/vw-storman2,hda,w'' ] vif=[ ''mac=00:16:3e:0b:e9:5a,bridge=eth0'', ] keymap="hu" stdvga=0...