similar to: Xen paravirt_ops tree for testing

Displaying results from an estimated 10000 matches similar to: "Xen paravirt_ops tree for testing"

2008 Jul 31
0
[Xen-devel] State of Xen in upstream Linux
----- Forwarded message from Jeremy Fitzhardinge <jeremy at goop.org> ----- From: Jeremy Fitzhardinge <jeremy at goop.org> To: Xen-devel <xen-devel at lists.xensource.com>, xen-users at lists.xensource.com, Virtualization Mailing List <virtualization at lists.osdl.org> Date: Wed, 30 Jul 2008 17:51:37 -0700 Subject: [Xen-devel] State of Xen in upstream Linux Well,
2008 Jul 31
6
State of Xen in upstream Linux
Well, the mainline kernel just hit 2.6.27-rc1, so it's time for an update about what's new with Xen. I'm trying to aim this at both the user and developer audiences, so bear with me if I seem to be waffling about something irrelevant. 2.6.26 was mostly a bugfix update compared with 2.6.25, with a few small issues fixed up. Feature-wise, it supports 32-bit domU with the core devices
2008 Feb 27
1
xen: Make hvc0 the preferred console in domU
This makes the Xen console just work. Before, you had to ask for it on the kernel command line with console=hvc0 Signed-off-by: Markus Armbruster <armbru at redhat.com> --- diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c index 49e5358..df63185 100644 --- a/arch/x86/xen/enlighten.c +++ b/arch/x86/xen/enlighten.c @@ -25,6 +25,7 @@ #include <linux/mm.h> #include
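For context, the patch's approach boils down to registering hvc as a preferred console during early Xen guest setup, so the kernel picks /dev/hvc0 without any command-line help. A minimal sketch (add_preferred_console() is the real kernel API; the wrapper name here is made up):

#include <linux/console.h>
#include <linux/init.h>

/* Sketch: called from early Xen guest setup. Registering hvc0 as
 * a preferred console means "console=hvc0" is no longer needed on
 * the kernel command line. Wrapper name is hypothetical. */
static void __init xen_setup_preferred_console(void)
{
        add_preferred_console("hvc", 0, NULL);
}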
2007 May 21
2
changing definition of paravirt_ops.iret
I'm implementing a more efficient version of the Xen iret paravirt_op, so that it can use the real iret instruction where possible. I really need to get access to per-cpu variables, so I can set the event mask state in the vcpu_info structure, but unfortunately at the point where INTERRUPT_RETURN is used in entry.S, the usermode %fs has already been restored. How would you feel if we changed
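A rough C rendering of the fast path being proposed (the real code is assembly in entry.S; the helper names and hookup below are illustrative, not Xen's actual implementation):

/* Sketch: before returning to the guest, restore the virtual
 * interrupt-enable state in the per-cpu vcpu_info, then use a real
 * iret unless an event is pending. This is why per-cpu access
 * (via %fs) is needed so late in the exit path. */
struct vcpu_info {
        unsigned char evtchn_upcall_pending;
        unsigned char evtchn_upcall_mask;
};

extern struct vcpu_info *this_vcpu;  /* per-cpu, set up at boot */
extern void native_iret(void);       /* plain iret instruction */
extern void hypercall_iret(void);    /* slow path via Xen */

static void xen_iret_sketch(int enable_events)
{
        this_vcpu->evtchn_upcall_mask = !enable_events;
        if (!this_vcpu->evtchn_upcall_pending)
                native_iret();
        else
                hypercall_iret();
}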
2007 Apr 18
2
MMU operations in paravirt_ops
Hi all, The next obvious step for paravirt_ops seems to me to be higher-level mmu operations: from reading the VMI patches it seems to do flushing, whereas Xen opts for batching. In the spirit of ops structures, this would be done by putting higher-level operations into the ops structure, and batching done by the op itself (perhaps with a default implementation for those too lazy to implement
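One way the batching-with-a-default idea could look (a sketch under assumed types; not the interface that was eventually merged):

/* Sketch: expose a higher-level set_ptes op that backends such as
 * Xen can batch internally, with a loop-based default for backends
 * that don't care. pte_t and the ops layout are illustrative. */
typedef unsigned long pte_t;

struct pv_mmu_ops_sketch {
        void (*set_pte)(pte_t *ptep, pte_t val);
        void (*set_ptes)(pte_t *ptep, const pte_t *vals, int n);
};

/* default for those too lazy to implement batching: one low-level
 * op per entry */
static void default_set_ptes(struct pv_mmu_ops_sketch *ops,
                             pte_t *ptep, const pte_t *vals, int n)
{
        int i;

        for (i = 0; i < n; i++)
                ops->set_pte(&ptep[i], vals[i]);
}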
2015 Feb 11
1
[PATCH] virtual: Documentation: simplify and generalize paravirt_ops.txt
From: "Luis R. Rodriguez" <mcgrof at suse.com> The general documentation we have for pv_ops is currenty present on the IA64 docs, but since this documentation covers IA64 xen enablement and IA64 Xen support got ripped out a while ago through commit d52eefb47 present since v3.14-rc1 lets just simplify, generalize and move the pv_ops documentation to a shared place. Cc: Isaku
2007 Apr 18
1
paravirt_ops.safe_halt vs .halt?
What's paravirt_ops.halt for? Is it the non-safe equivalent to safe_halt, or is it intended for shutting down the machine? It doesn't seem to be used anywhere. J
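For reference, the native x86 versions make the distinction concrete: safe_halt enables interrupts and halts atomically, while halt leaves the interrupt flag alone. Roughly:

/* safe_halt: "sti; hlt" as one unit, so a pending interrupt can
 * wake the cpu immediately after enabling. */
static inline void native_safe_halt(void)
{
        asm volatile("sti; hlt" : : : "memory");
}

/* halt: just "hlt", with whatever interrupt state is current */
static inline void native_halt(void)
{
        asm volatile("hlt" : : : "memory");
}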
2015 Oct 02
1
hvc - hypervisor virtual console & virsh
Hi Fi, cmdline: console=hvc0 systemd.wants=getty@hvc0 Connect via virt-manager - View / Text Consoles / Text Console no.: Fedora 24 (Rawhide) Kernel 4.3.0-0.rc3.git2.4.fc24.x86_64 on an x86_64 (hvc0) localhost login: root Last login: Fri Oct 2 17:28:36 on hvc0 # tty /dev/hvc0 # systemctl status getty@hvc0.service ● getty@hvc0.service - Getty on hvc0 Loaded: loaded
2007 Apr 18
1
[PATCH] (with benchmarks) binary patching of paravirt_ops call sites
Hi all, Sorry for the delay. This implements binary patching of call sites for interrupt-related paravirt ops, since no doubt Andi wasn't the only one to believe this approach is slow. The benchmarks were done on a UP 3GHz Pentium 4 with 512MB of RAM: 2.6.17-rc4 vs 2.6.17-rc4 with CONFIG_PARAVIRT=y vs 2.6.17-rc4 CONFIG_PARAVIRT=y with patch. Summary: with binary patching, the difference
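The shape of the mechanism, much simplified (names and patching logic here are illustrative; the kernel's real patch-site handling is more involved):

#include <string.h>

/* Sketch: each paravirt call site is recorded in a table at build
 * time; at boot, a site can be rewritten in place with the native
 * instructions instead of an indirect call. */
struct patch_site_sketch {
        unsigned char *instr;   /* address of the call site */
        unsigned char type;     /* which op, e.g. irq_disable */
        unsigned char len;      /* bytes available at the site */
};

/* native irq_disable is a single "cli"; write it and nop-pad the
 * rest of the original call sequence. */
static void patch_native_irq_disable(struct patch_site_sketch *s)
{
        s->instr[0] = 0xfa;                      /* cli */
        memset(s->instr + 1, 0x90, s->len - 1);  /* nops */
}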
2007 Apr 18
2
[PATCH/RFC] replace get_scheduled_cycles with sched_clock paravirt_op
Subject: Add a sched_clock paravirt_op The tsc-based get_scheduled_cycles interface is not a good match for Xen's runstate accounting, which reports everything in nanoseconds. This patch replaces this interface with a sched_clock interface, which matches both Xen and VMI's requirements. In order to do this, we: 1. replace get_scheduled_cycles with sched_clock 2. hoist cycles_2_ns
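Roughly what the tsc-based sched_clock path looks like once cycles_2_ns is hoisted (a sketch; the boot-time calibration of the scale factor is omitted and names are approximate):

/* Sketch: sched_clock() reads the tsc and converts cycles to
 * nanoseconds with a precomputed fixed-point multiplier. */
static unsigned long long cyc2ns_scale;  /* set at tsc calibration */
#define CYC2NS_SCALE_FACTOR 10           /* fixed-point shift */

static inline unsigned long long cycles_2_ns(unsigned long long cyc)
{
        return (cyc * cyc2ns_scale) >> CYC2NS_SCALE_FACTOR;
}

static inline unsigned long long read_tsc_sketch(void)
{
        unsigned int lo, hi;

        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
}

unsigned long long sched_clock_sketch(void)
{
        return cycles_2_ns(read_tsc_sketch());
}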
2013 Apr 11
4
How to determine why a server is not responding
Hi to all! We're using CentOS 5.5 64-bit for our Plesk 11. This week we had the following problem 3 times... Suddenly, the server stops responding on all services (SSH, Apache, Postfix, ...) but ping works! After waiting a few minutes (or sometimes 2 hours) the server remains unresponsive until we reboot. After reboot we search /var/log/messages but cannot find useful information...