Displaying 10 results from an estimated 10 matches for "tapdevic".
2012 Jan 07
2
Linux Container and Tapdev
Hi:
Recently I have been studying Linux containers (LXC), a lightweight form of OS-level
virtualization.
With my previous knowledge of tapdisk, my assumption is that I could use a VHD
per container to separate the storage of the containers; later on, if the VHDs are kept in
distributed storage, the containers could even be migrated.
Build a filesystem on the tapdev
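For illustration, a minimal sketch of that idea, assuming blktap2's tap-ctl tool is installed and the blktap module is loaded; the VHD path and mount point below are made up:
# Create a tapdev backed by a VHD, put a filesystem on it, and use it as
# a container rootfs. On many blktap2 builds "tap-ctl create" prints the
# new tapdev node (e.g. /dev/xen/blktap-2/tapdev0).
vhd=/var/lib/containers/c1.vhd
dev=$(tap-ctl create -a "vhd:$vhd")
mkfs.ext3 "$dev"
mkdir -p /var/lib/lxc/c1/rootfs
mount "$dev" /var/lib/lxc/c1/rootfs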
2012 Nov 07
4
[PATCH 1/2] 4.1.2 blktap2 cleanup fixes.
---------------------------------------------------------------------------
Backport of the following patch from development:
# User Ian Campbell <[hidden email]>
# Date 1309968705 -3600
# Node ID e4781aedf817c5ab36f6f3077e44c43c566a2812
# Parent 700d0f03d50aa6619d313c1ff6aea7fd429d28a7
libxl: attempt to cleanup tapdisk processes on disk backend destroy.
This patch properly terminates the
2017 Oct 10
3
tunnel device name acquisition?
Numerous how-tos all over the Internet show how one would set up
a tunnel using ssh, e.g.:
ssh -f -o Tunnel=ethernet <server_ip> true
I was wondering if there's a way to then acquire the names
of the local and remote tun/tap interfaces (e.g., when using the default
"-w any:any") for automatic tunnel configuration afterwards, e.g.:
ip link set $TapDev up
ip link set
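One way to do that, as a sketch: snapshot the interface list before and after bringing the tunnel up and take the difference (assumes a Linux host with bash and /sys/class/net; server_ip is a placeholder):
# Find the name of the tap interface ssh just created by diffing
# /sys/class/net before and after.
before=$(ls /sys/class/net | sort)
ssh -f -o Tunnel=ethernet -w any:any server_ip true
sleep 1   # give ssh a moment to create the interface
after=$(ls /sys/class/net | sort)
TapDev=$(comm -13 <(echo "$before") <(echo "$after"))
ip link set "$TapDev" up
The remote interface name would need the same trick run on the server side.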
2012 Jan 13
0
How many block device in domU supports?
Hi:
I've been thinking about this for a while: I want to build 1000 Linux containers in an HVM,
with each container owning one block device, so there will be 1000 tapdevs and 1000 tapdisk
processes.
My questions are:
1. How many devices does a domU support?
2. How many tapdev devices does blktap2 support?
3. Would there be a performance issue in this case?
4. Basically, I want
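One rough way to probe those limits empirically, as a sketch (assumes tap-ctl from blktap2 and a directory of pre-created VHDs; the paths are hypothetical):
# Keep creating tapdevs until tap-ctl refuses, then count what exists.
i=0
while [ "$i" -lt 1000 ]; do
    tap-ctl create -a "vhd:/srv/vhds/disk$i.vhd" || { echo "stopped at $i"; break; }
    i=$((i + 1))
done
tap-ctl list | wc -l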
2010 Aug 13
4
[PATCH] xl: Make blktap support optional
Make blktap support optional.
Enable it by default on Linux, disable it on non-Linux.
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
--
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85609 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo, Andrew Bowd
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen,
2011 Sep 21
1
[PATCH] libxl: attempt to cleanup tapdisk processes on disk backend destroy
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1316609964 -3600
# Node ID b43fd821d1aebc8671e684bfc285cda7a6002ff1
# Parent 206afa070919e3fe0b13a03f870ca2da44ab604a
libxl: attempt to cleanup tapdisk processes on disk backend destroy.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r 206afa070919 -r b43fd821d1ae
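For comparison, the manual equivalent of this cleanup, as a sketch rather than the patch itself (the tap-ctl invocations assume blktap2; the pid/minor values are placeholders):
# After destroying a domain, look for tapdevs and tapdisk processes that
# were left behind, and tear them down by hand.
tap-ctl list                 # shows pid, minor, state and type:path per tapdev
pgrep tapdisk                # any tapdisk processes still running?
# for an orphaned entry with pid $p and minor $m:
tap-ctl destroy -p "$p" -m "$m"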
2012 Mar 06
0
Livelock induced failure in blktap2.
We've been working on getting Xen 4.1.2 validated for internal use and
have run into what appears to be a livelock-induced failure in
properly freeing a blktap2 device.
We ported the blktap2 driver from Dan Stodden's git tree into 3.2.x,
which is a reasonably straightforward process. We are also running
the toolchain with a patch which Ian Campbell posted in order to get
xl to
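A quick way to tell whether a tapdev really is stuck in teardown, as a sketch:
# Any minors still allocated after the domain is gone, and any tapdisk
# processes sleeping in the kernel (D state), hint at a stuck teardown.
tap-ctl list
ps axo pid,stat,wchan:32,comm | grep tapdisk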
2000 Oct 17
2
setup problems
Hi,
Although I'm Dutch too, I'll write this in English. I have a similar problem
to the one on the help forum. I'll paste my setup first
server tincd.conf
----
ListenPort = 8089
MyOwnVPNIP = 192.168.100.1/24
#VpnMask = 255.255.255.0
TapDevice = /dev/tap0
Passphrases=/usr/local/etc/tinc/passphrases
server tapdev
----
tap0 Link encap:Ethernet HWaddr FE:FD:C0:A8:6F:01
inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
UP BROADCAST RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:...
2010 Jul 19
17
BLKTAPCTRL[2375]: blktapctrl_linux.c:86: blktap0 open failed
I'm getting this message (subject line) in daemon.log every time I
start or restart xend. I'm not sure if this is related to the fact
that I cannot boot my Windows domU from a VHD file. The Windows
domU was working fine with Xen 4.0.0 (with a 2.6.32.14 dom0 kernel).
When I upgraded to Xen 4.0.1-rc4 (with a 2.6.32.16 dom0 kernel), I can
no longer boot the Windows domU from the VHD. The
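A few hedged first checks for that open failure (the control-node path below is an assumption about where blktapctrl looks):
# Is a blktap module loaded, does the control node exist, and did the
# driver register a major number?
lsmod | grep -i blktap
ls -l /dev/xen/blktap0
grep -i blktap /proc/devices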
2010 Aug 12
0
[PATCH, v2]: xl: Implement per-API-call garbage-collection lifetime
Changes since v1:
- Fix a double-free bug introduced by v1, pointed out by Stefano
where internal pointer was being passed back to caller from
libxl_create_stubdom()
8<----------------------------------------------------------------------
Currently, scratch variables allocated by libxl have the same lifetime as
the context. While this is suitable for one-off invocations of xl, it is
not