search for: reentry

Displaying 14 results from an estimated 52 matches for "reentry".

2009 Aug 11
1
[PATCHv2 0/2] vhost: a kernel-level virtio server
This implements vhost: a kernel-level backend for virtio. The main motivation for this work is to reduce virtualization overhead for virtio by removing system calls on the data path, without guest changes. For virtio-net, this removes up to 4 system calls per packet: vm exit for kick, reentry for kick, iothread wakeup for packet, interrupt injection for packet. A more detailed description is attached to the patch itself. The patches are against 2.6.31-rc4. I'd like them to go into linux-next and down the road 2.6.32 if possible. Please comment. Changes from v1: - Move use_mm/un...
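The "kick" round trip described here boils down to eventfd signalling. A minimal userspace sketch of that primitive in C; it assumes nothing about the real vhost wiring, which passes the eventfd to the kernel via ioctls on a vhost device rather than reading it in userspace as done below:

    /* Minimal sketch of the eventfd "kick" signalling that vhost builds on.
     * Standalone illustration only; not the actual vhost code path. */
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/eventfd.h>

    int main(void)
    {
        int kick = eventfd(0, 0);          /* the "kick" file descriptor */
        if (kick < 0)
            return 1;

        uint64_t one = 1;
        write(kick, &one, sizeof(one));    /* producer: signal one kick */

        uint64_t n;
        read(kick, &n, sizeof(n));         /* consumer: vhost's kernel worker
                                              does this, so the guest's kick
                                              never re-enters userspace */
        printf("got %llu kick(s)\n", (unsigned long long)n);
        close(kick);
        return 0;
    }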
2013 Aug 23
1
Setting Up LVS to Load Balance DNS
Greetings, all: OS: CentOS 6.4 x86_64 Kernel: 2.6.32-358.14.1 I could use some assistance with setting up pulse to load balance my DNS servers. I've configured tcp and udp port 53 with the piranha gui, set up arptables rules on the real servers and added the virtual ip to the bond0 interface on the real servers, but I'm still having no luck in getting things going. A dig against the
2016 May 04
2
OrcLazyJIT for windows
...after some digging I found that the crash is caused by LocalJITCompileCallbackManager::reenter not getting the correct CompileCallback and trampolineid references. This in turn is caused by OrcX86_64::writeResolverCode not respecting the Windows calling convention in the asm code that calls the reentry function. After making changes to the asm code in OrcX86_64::writeResolverCode, the code runs without any problems. I thought I'd share it here with the public so that others who would like to use orclazyjit on windows could benefit. Please let me know if a different channel would be more appropriat...
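The failure mode can be sketched in plain C. The names below (reenter, the trampoline argument) are illustrative stand-ins, not the real ORC API; the actual bug was in hand-written x86-64 assembly, which must pass the resolver's two arguments in RDI/RSI under the System V ABI but in RCX/RDX under the Win64 ABI, so asm written for one convention hands the reentry function garbage on the other:

    /* Conceptual sketch of the lazy-JIT reentry path. If the resolver's
     * asm uses the wrong calling convention, 'mgr' and 'trampoline' below
     * arrive in the wrong registers and reenter() sees garbage. */
    #include <stdio.h>

    typedef void (*jit_fn)(void);

    static void compiled_body(void)
    {
        puts("lazily compiled body");
    }

    /* Stand-in for the compile-callback manager's reentry function:
     * look up the callback for this trampoline, compile it, and return
     * the address of the freshly compiled code. */
    static jit_fn reenter(void *mgr, void *trampoline)
    {
        (void)mgr; (void)trampoline;         /* placeholders */
        return compiled_body;
    }

    int main(void)
    {
        jit_fn target = reenter(NULL, NULL); /* what the resolver asm does */
        target();                            /* then jump to the result */
        return 0;
    }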
2020 Nov 16
2
ORC JIT Weekly #26 -- Orc library break-up, remote TargetProcessControl, and the beginnings of a runtime.
...ovide an introductory example for these APIs. The next step for Orc, required for both cross-process support and new features, is a runtime library. The runtime should be loadable via the JIT like any other static library, provide support for JIT re-entry (eventually replacing the existing Orc-ABI reentry code), thread locals, eh-frame and language runtime registration, execution of initializers, and more. I've started a prototype of a runtime as a library within compiler-rt in the orc-runtime-prototype branch of my llvm fork [3]. There's not much there yet, but I will keep you updated on my...
2020 Feb 04
0
Always Be Conferencing v16e - pure AEL-based dial plan solution
...------------------
; * Needs two contexts.
; * Adds nine Dynamic Call Back numbers.
; * Adds local call back option with 922*EXTENSION
; * Requires call backs be registered before first use
;   by dialing 123*2221*NUMBER
[from-external-custom-abc-example-3]
exten = 3035551111,1,AELSub(pngnpbx-abc-reentry,1,${EXTEN},${abclo})
; Place the dynamic call back DIDs here, one per line (copy/paste/edit.)
exten = 3035552222,1,AELSub(pngnpbx-abc-reentry,2,${EXTEN},${abclo})
; Do not change anything except for the phone numbers and position number.
exten = 3035553333,1,AELSub(pngnpbx-abc-reentry,3,${EXT...
2013 Jun 19
3
[PATCH] virtio-spec: add field for scsi command size
On 19/06/2013 10:24, Michael S. Tsirkin wrote: >> > 2) We introduce VIRTIO_NET_F_ANY_LAYOUT and VIRTIO_BLK_F_ANY_LAYOUT >> > specifically for net and block (note the new names). So why not a transport feature? Is it just because the SCSI commands for virtio-blk also require a config space field? Sorry if I missed this upthread. Paolo >> > 3) I note the
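Whichever name wins, the driver-side check is a single bit test against the negotiated 64-bit feature mask. A minimal sketch in C; bit 27 matches the transport-level VIRTIO_F_ANY_LAYOUT that the Linux uapi headers define, and the helper name below is made up for illustration:

    /* Sketch of a virtio feature-bit test. Bit 27 is VIRTIO_F_ANY_LAYOUT
     * in the Linux uapi headers; virtio_has_feature() is hypothetical. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define VIRTIO_F_ANY_LAYOUT 27

    static bool virtio_has_feature(uint64_t features, unsigned int bit)
    {
        return features & (1ULL << bit);
    }

    int main(void)
    {
        uint64_t negotiated = 1ULL << VIRTIO_F_ANY_LAYOUT;
        if (virtio_has_feature(negotiated, VIRTIO_F_ANY_LAYOUT))
            puts("driver may use arbitrary descriptor layouts");
        return 0;
    }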
2016 May 04
2
OrcLazyJIT for windows
...crash is caused by
>> LocalJITCompileCallbackManager::reenter not getting the correct
>> CompileCallback and trampolineid references. This in turn is being caused by
>> OrcX86_64::writeResolverCode not respecting windows calling convention in
>> the asm code for calling the reentry function.
>>
>> After making changes to the asm code in OrcX86_64::writeResolverCode, the
>> code runs without any problems. I thought I share it here with the public
>> so that others who would like to use orclazyjit on windows could benefit.
>> Please let me know if...
2013 Jun 20
0
[PATCH] virtio-spec: add field for scsi command size
...
-/* For development, we want to crash whenever the ring is screwed. */
-#define BAD_RING(_vq, fmt, args...) \
-	do { \
-		dev_err(&(_vq)->vq.vdev->dev, \
-			"%s:"fmt, (_vq)->vq.name, ##args); \
-		BUG(); \
-	} while (0)
-/* Caller is supposed to guarantee no reentry. */
-#define START_USE(_vq) \
-	do { \
-		if ((_vq)->in_use) \
-			panic("%s:in_use = %i\n", \
-				(_vq)->vq.name, (_vq)->in_use); \
-		(_vq)->in_use = __LINE__; \
-	} while (0)
-#define END_USE(_vq) \
-	do { BUG_ON(!(_vq)->in_use); (_vq)->in_use =...
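The START_USE/END_USE macros being deleted here are a debugging guard against exactly the reentry this search is about. A standalone sketch of the same pattern, with the kernel-only helpers (dev_err, panic, BUG_ON) swapped for userspace equivalents:

    /* Userspace sketch of the START_USE/END_USE reentry guard: remember
     * which line entered the critical section and abort if a second
     * caller enters before the first one leaves. */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct vq {
        const char *name;
        int in_use;          /* 0 when idle, else the entering __LINE__ */
    };

    #define START_USE(_vq)                              \
        do {                                            \
            if ((_vq)->in_use) {                        \
                fprintf(stderr, "%s: in_use = %i\n",    \
                        (_vq)->name, (_vq)->in_use);    \
                abort();                                \
            }                                           \
            (_vq)->in_use = __LINE__;                   \
        } while (0)

    #define END_USE(_vq) \
        do { assert((_vq)->in_use); (_vq)->in_use = 0; } while (0)

    int main(void)
    {
        struct vq vq = { "vq0", 0 };
        START_USE(&vq);      /* enter the no-reentry region */
        /* ... touch the ring ... */
        END_USE(&vq);        /* leave it; a new START_USE is now legal */
        return 0;
    }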
2010 May 24
16
questions about zil
I recently got a new SSD (ocz vertex LE 50gb). It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs... is it just data loss, or is it pool loss? Also, does the fact that I have a UPS matter? The numbers I'm seeing are really nice... these are some nfs tar times before
2013 Jun 20
3
[PATCH] virtio-spec: add field for scsi command size
...ash whenever the ring is screwed. */
> -#define BAD_RING(_vq, fmt, args...) \
> -	do { \
> -		dev_err(&(_vq)->vq.vdev->dev, \
> -			"%s:"fmt, (_vq)->vq.name, ##args); \
> -		BUG(); \
> -	} while (0)
> -/* Caller is supposed to guarantee no reentry. */
> -#define START_USE(_vq) \
> -	do { \
> -		if ((_vq)->in_use) \
> -			panic("%s:in_use = %i\n", \
> -				(_vq)->vq.name, (_vq)->in_use); \
> -		(_vq)->in_use = __LINE__; \
> -	} while (0)
> -#define END_USE(_vq) \
> -	do {...
2009 Aug 10
0
[PATCH 0/2] vhost: a kernel-level virtio server
This implements vhost: a kernel-level backend for virtio. The main motivation for this work is to reduce virtualization overhead for virtio by removing system calls on the data path, without guest changes. For virtio-net, this removes up to 4 system calls per packet: vm exit for kick, reentry for kick, iothread wakeup for packet, interrupt injection for packet. A more detailed description is attached to the patch itself. The patches are against 2.6.31-rc4. I'd like them to go into linux-next and down the road 2.6.32 if possible. Please comment. Userspace bits using this driver...
2009 Aug 13
0
[PATCHv3 0/2] vhost: a kernel-level virtio server
This implements vhost: a kernel-level backend for virtio. The main motivation for this work is to reduce virtualization overhead for virtio by removing system calls on the data path, without guest changes. For virtio-net, this removes up to 4 system calls per packet: vm exit for kick, reentry for kick, iothread wakeup for packet, interrupt injection for packet. A more detailed description is attached to the patch itself. The patches are against 2.6.31-rc4. I'd like them to go into linux-next and down the road 2.6.32 if possible. Please comment. Changelog from v2: - Comments on...
2009 Aug 27
0
[PATCHv5 0/3] vhost: a kernel-level virtio server
...ort Thanks! --- This implements vhost: a kernel-level backend for virtio. The main motivation for this work is to reduce virtualization overhead for virtio by removing system calls on the data path, without guest changes. For virtio-net, this removes up to 4 system calls per packet: vm exit for kick, reentry for kick, iothread wakeup for packet, interrupt injection for packet. This driver is as minimal as possible and implements almost none of the optional virtio features, but it's fully functional (including migration support interfaces), and already shows a latency improvement over userspace. S...
2009 Nov 02
0
[PATCHv6 0/3] vhost: a kernel-level virtio server
...you think? --- This implements vhost: a kernel-level backend for virtio. The main motivation for this work is to reduce virtualization overhead for virtio by removing system calls on the data path, without guest changes. For virtio-net, this removes up to 4 system calls per packet: vm exit for kick, reentry for kick, iothread wakeup for packet, interrupt injection for packet. This driver is pretty minimal, but it's fully functional (including migration support interfaces), and already shows performance (especially latency) improvement over userspace. A more detailed description is attached to th...