I was investigating adding network buffering support to xl Remus. I need to:

1) set up an ifb device and some traffic shaping rules (packet redirection)
2) control the release of the buffer every checkpoint.

For the latter, I have managed to push C API support for the Remus network
buffer into libnl, and the associated Remus network buffer module (sch_plug)
into the mainline kernel (3.4 onwards).

However, setting up the buffer itself still requires scripting support,
because there is no proper C API to do this setup from xl. For example, xl
invokes a script with the domain name or vif name, and the script does the
following:

  modprobe ifb                      # load the ifb module
  ip link set ifb0 up               # bring up an ifb device
  tc qdisc add dev vif1.0 ingress   # add an ingress qdisc to each vif
  # redirect traffic from vifs to ifb devices
  tc filter add dev vif1.0 parent ffff: proto ip pref 10 u32 \
      match u32 0 0 action mirred egress redirect dev ifb0

and then communicates the ifb device associated with each vif back to xl.
Later, xl would add a "plug" qdisc to ifb0 and control buffering via the
C API available from libnl.

Is there a general convention in the xl code base with respect to invoking
external scripts and reading their output back into xl? I checked the
hotplug script implementation. While there is a reference point for
external script invocation, hotplug script results are communicated back to
xl using xenstore -- standard xenstore entries. I can use xenstore too, but
I don't want to pollute xenstore entries unless absolutely necessary.

Or is it as simple as reading and writing to files, or using standard
communication primitives like named fifos or pipes?

As an alternative, instead of adding a libnl dependency to xl, I can reuse
the existing Python code for network buffering and control the Python
process via unix sockets, pipes or fifos. The Python code does not rely on
libnl.

Shriram

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
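[Editor's note: the per-vif setup steps above can be sketched as a small script. This is a dry run that prints the commands instead of executing them, since applying them needs root and a real vif; the default device names are only illustrative.]

```shell
#!/bin/sh
# Sketch of the per-vif buffer setup: redirect a vif's egress traffic
# into an ifb device so that a plug qdisc on the ifb can buffer it.
# Dry run: prints the commands; pipe through `sh` (as root) to apply.
setup_netbuf() {
    vif=$1
    ifb=$2
    echo "ip link set $ifb up"
    echo "tc qdisc add dev $vif ingress"
    echo "tc filter add dev $vif parent ffff: proto ip pref 10 u32" \
         "match u32 0 0 action mirred egress redirect dev $ifb"
}

setup_netbuf "${1:-vif1.0}" "${2:-ifb0}"
```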
On Sun, 2013-07-14 at 10:13 -0500, Shriram Rajagopalan wrote:

> I can use xenstore too but I don't want to pollute xenstore entries
> unless absolutely necessary.

Ultimately it may be that the script is running in a different domain
(e.g. a driver domain), so xenstore may be the best way.

Can you enumerate what would be going in there? It may be that
/libxl/<domid> or even /tools/remus/<domid> might be an appropriate home
for it.

> Or is it as simple as reading and writing to files or using standard
> communication primitives like named fifos or pipes?
>
> As an alternative, instead of adding a libnl dependency to xl,

Would it be that bad for xl (or perhaps better libxl) to depend on
libnl?

> I can reuse existing python code for network buffering and control the
> python process via unix sockets or pipes or fifos. The python code
> does not rely on libnl.

I don't think we want to go down this route.

Ian.
On Wed, Jul 17, 2013 at 9:57 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> Ultimately it may be that the script is running in a different domain
> (e.g. a driver domain), so xenstore may be the best way.
>
> Can you enumerate what would be going in there? It may be that
> /libxl/<domid> or even /tools/remus/<domid> might be an appropriate
> home for it.

Driver domains are something that I didn't think about. Interesting
idea, though.

Basically, if I were to use xenstore as a communication medium between
the script and xl-remus, the things that xl would need as return values
from the script (via xenstore) are the ifb devices that map to each of
the vifs belonging to the guest, so that plug qdiscs can be installed
on all of them.

Simplest (and most common) case: install a network buffer on all
interfaces in the guest and obtain the relevant ifb devices:

 - xl forks "/etc/xen/scripts/remus-netbuf <domid> install"
 - the script installs ifbs on all vifs
 - the script writes to xenstore, e.g.
     /libxl/<domid>/remus/netbufs = "ifb0,ifb2,ifb6"
 - xl reads /libxl/<domid>/remus/netbufs, installs the plug qdisc on
   all these ifb devices and moves on
 - on cleanup (i.e., backup failure), xl forks
   "/etc/xen/scripts/remus-netbuf <domid> uninstall"

To allow the possibility of installing the network buffer only on
select vifs, the script instead writes one entry per vif:

     /libxl/<domid>/remus/netbufs/vifX.Y = ifb0
     /libxl/<domid>/remus/netbufs/vifX1.Y1 = ifb2

followed by xl reading each of the entries under remus/netbufs.

This is basically like the hotplug script stuff, except that it is done
on demand when Remus is started.

> > Or is it as simple as reading and writing to files or using standard
> > communication primitives like named fifos or pipes?
> >
> > As an alternative, instead of adding a libnl dependency to xl,
>
> Would it be that bad for xl (or perhaps better libxl) to depend on
> libnl?

Maybe not. Anything above libnl 3.2.8 should do. (Unfortunately, Debian
wheezy is still at 3.2.7; the same applies to Ubuntu versions before
13.04.)

> > I can reuse existing python code for network buffering and control
> > the python process via unix sockets or pipes or fifos. The python
> > code does not rely on libnl.
>
> I don't think we want to go down this route.

Why? Not only will we be complicating the xl code with netlink stuff,
but we will also be adding more code related to drbd/blktap3 and so on.
My intention was to factor out the generic parts of the existing Remus
Python code base (most of it is in tools/remus and
tools/python/xen/remus/device.py) and simply fork off the script from
xl. The two can communicate using a simple pipe. There won't be any
legacy dependencies on xend whatsoever. Most of that code base is
self-contained.
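[Editor's note: the install/read handshake proposed above could look like the following sketch. The xenstore path and the comma-separated value follow the proposal in the mail; `xenstore-write`/`xenstore-read` are the standard CLI tools, stubbed out here with `echo` so the flow can be shown without a running xenstored.]

```shell
#!/bin/sh
# Sketch of the script <-> xl handshake via xenstore.
# XS is a stub; drop it to really invoke the xenstore CLI tools.
XS=echo

domid=5   # example domain id

# Script side ("remus-netbuf <domid> install"): report the ifbs used.
$XS xenstore-write "/libxl/$domid/remus/netbufs" "ifb0,ifb2,ifb6"

# xl side: read the list back and split it into one device per line,
# ready for installing a plug qdisc on each.
split_netbufs() {
    printf '%s\n' "$1" | tr ',' '\n'
}

split_netbufs "ifb0,ifb2,ifb6"
```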
On Wed, Jul 17, 2013 at 5:39 PM, Shriram Rajagopalan <rshriram@gmail.com> wrote:

> [...]
>
> Basically, if I were to use xenstore as a communication medium between
> the script and xl-remus, the things that xl would need as return
> values from the script (via xenstore) are the ifb devices that map to
> each of the vifs belonging to the guest, so that plug qdiscs can be
> installed on all of them.

Just to check, are you making a distinction here between xl and libxl?
Ideally the important functionality to do Remus would be in libxl, so
that any toolstack could implement it. xl is meant to represent an
example toolstack one might build; in theory one should be able to
implement Remus using libvirt or xapi without significant effort.

 -George
On Thu, 2013-07-18 at 11:53 +0100, George Dunlap wrote:

> Just to check, are you making a distinction here between xl and libxl?
> Ideally the important functionality to do Remus would be in libxl, so
> that any toolstack could implement it. xl is meant to represent an
> example toolstack one might build; in theory one should be able to
> implement Remus using libvirt or xapi without significant effort.

Right, and that's why I would prefer to avoid a dependency on Python,
since I think at least some of these projects will see it as an
additional barrier.

On the other hand, if it's just an implementation detail of a Remus
specific script which libxl happens to call out to when asked, then I
suppose it is up to the Remus folks whether they find this acceptable.

Ian.
On Thu, Jul 18, 2013 at 7:10 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2013-07-18 at 11:53 +0100, George Dunlap wrote:
>
> > Just to check, are you making a distinction here between xl and
> > libxl? Ideally the important functionality to do Remus would be in
> > libxl, so that any toolstack could implement it. xl is meant to
> > represent an example toolstack one might build; in theory one
> > should be able to implement Remus using libvirt or xapi without
> > significant effort.

That helps a lot. I somehow got the wrong impression that xl was part
of libxl (a sort of frontend).

> Right, and that's why I would prefer to avoid a dependency on Python,
> since I think at least some of these projects will see it as an
> additional barrier.
>
> On the other hand, if it's just an implementation detail of a Remus
> specific script which libxl happens to call out to when asked, then I
> suppose it is up to the Remus folks whether they find this acceptable.

Correction: this script will be called by xl, not libxl. As George put
it, other toolstacks may choose to do this setup in their own way.

If there were a decent way of doing stuff like "modprobe ifb" or
"tc filter add dev vif1.0 parent root u32 match u32 0 0 ..." in C, I
would have jumped on it. As it turns out, there isn't one. So, if you
folks are okay with the xl code doing things like

  system("modprobe ifb numifbs=10");
  system("ip link set ifbX up");
  system("tc qdisc add dev vif1.0 ingress");
  system("tc filter add dev vif1.0 parent ffff: proto ip pref 10 u32 "
         "match u32 0 0 action mirred egress redirect dev ifbX");

I can get rid of the script altogether. I thought you guys would find
this distasteful ;) and hence the decision to invoke an external
script.

OTOH, the only thing that libxl needs is a list of ifb devices on which
to install the qdisc. This will be done programmatically.
On Thu, 2013-07-18 at 09:26 -0400, Shriram Rajagopalan wrote:

> Correction: this script will be called by xl, not libxl. As George
> put it, other toolstacks may choose to do this setup in their own way.

Actually I think this is exactly the sort of complexity which libxl
serves to remove from all toolstacks. If they all need to do it then it
belongs in libxl.

> system("modprobe ifb numifbs=10")
> system("ip link set ifbX up")

These two should be part of the required host configuration, I think,
along the same lines as how we pushed general host networking setup out
of the toolstack and into the administrator's capable hands, e.g.
http://wiki.xen.org/wiki/Network_Configuration_Examples_%28Xen_4.1%2B%29

IOW just document it, same as we document "create xenbr0".

> system("tc qdisc add dev vif1.0 ingress")
> system("tc filter add dev vif1.0 parent ffff: proto ip pref 10 u32
>        match u32 0 0 action mirred egress redirect dev ifbX")

These should be part of the existing vif hotplug scripts (called from
libxl), shouldn't they? Perhaps based on a new vif parameter to specify
the ifbX.

Ian.
On Thu, Jul 18, 2013 at 9:33 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> Actually I think this is exactly the sort of complexity which libxl
> serves to remove from all toolstacks. If they all need to do it then
> it belongs in libxl.
>
> > system("modprobe ifb numifbs=10")

Only this can go into the host config toolstack.

> > system("ip link set ifbX up")

The ifbX is an example. The ifb module names the interfaces ifb0 to
ifbN, where N is determined by modprobe ifb numifbs=N.

Now, let's say we have 10 ifbs in the system. Which ones do we pick for
the guest? If the VM has 3 interfaces, we need 3 ifbs, ifb0-2. That's
easy.

What if there are two Remus streams on the same system? Then we need to
maintain a list of which ifbs are in use and which ones are free. The
current Remus Python code (tools/python/xen/remus/device.py) has some
code to do this ("class Netbufpool"). Something similar needs to go
into libxl.

> These two should be part of the required host configuration, I think,
> along the same lines as how we pushed general host networking setup
> out of the toolstack and into the administrator's capable hands, e.g.
> http://wiki.xen.org/wiki/Network_Configuration_Examples_%28Xen_4.1%2B%29
>
> IOW just document it, same as we document "create xenbr0".
>
> > system("tc qdisc add dev vif1.0 ingress")
> > system("tc filter add dev vif1.0 parent ffff: proto ip pref 10 u32
> >        match u32 0 0 action mirred egress redirect dev ifbX")
>
> These should be part of the existing vif hotplug scripts (called from
> libxl), shouldn't they?

They don't belong in the vif-hotplug script. Adding these lines means
that all egress traffic from the VM will be routed via the IFB device
whether or not Remus is running. I don't think people would want that.

> Perhaps based on a new vif parameter to specify the ifbX

See the previous explanation on finding a free IFB device.

If we throw the responsibility of specifying IFB devices onto the
admin, libxl can basically do two system() calls as stated above,
install the plug qdisc and move on.
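[Editor's note: a Netbufpool-style allocator boils down to picking the first ifb name not already claimed. A minimal sketch; the pool size and the in-use list would come from the numifbs module parameter and some shared record (e.g. xenstore) respectively, both assumed here.]

```shell
#!/bin/sh
# Sketch: allocate the first ifb device not in the in-use list.
# Usage: pick_free_ifb NUMIFBS "ifb0 ifb3" -> prints e.g. "ifb1",
# or returns nonzero when the pool is exhausted.
pick_free_ifb() {
    numifbs=$1
    in_use=" $2 "
    i=0
    while [ "$i" -lt "$numifbs" ]; do
        case "$in_use" in
            *" ifb$i "*) ;;                 # taken, try the next one
            *) echo "ifb$i"; return 0 ;;
        esac
        i=$((i + 1))
    done
    return 1
}

pick_free_ifb 10 "ifb0 ifb1 ifb3"
```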
On 18/07/13 15:51, Shriram Rajagopalan wrote:

> Only this can go into the host config toolstack.

I'm not sure what you mean by "host config toolstack"; but the idea is
to treat this configuration just like the bridging or OVS -- one thing
set up at the beginning which the toolstack doesn't bother about.

> The ifbX is an example. The ifb module names the interfaces ifb0 to
> ifbN, where N is determined by modprobe ifb numifbs=N.
>
> Now, let's say we have 10 ifbs in the system. Which ones do we pick
> for the guest? If the VM has 3 interfaces, we need 3 ifbs, ifb0-2.
> That's easy.
>
> What if there are two Remus streams on the same system? Then we need
> to maintain a list of which ifbs are in use and which ones are free.
> The current Remus Python code (tools/python/xen/remus/device.py) has
> some code to do this ("class Netbufpool"). Something similar needs to
> go into libxl.

This is exactly like assigning vif numbers to guests, isn't it?

> They don't belong in the vif-hotplug script. Adding these lines means
> that all egress traffic from the VM will be routed via the IFB device
> whether or not Remus is running. I don't think people would want that.

No, the idea is that if an ifbX argument is present, you use it;
otherwise not. Since this needs to be done exactly once per interface
before it can be considered fully set up, the vif script is the natural
place to put it.

> If we throw the responsibility of specifying IFB devices onto the
> admin, libxl can basically do two system() calls as stated above,
> install the plug qdisc and move on.

If we throw that to the admin it's a royal pain to use. This can easily
be done by a computer; an admin has better things to spend his mental
energy on.

 -George
On Thu, Jul 18, 2013 at 11:02 AM, George Dunlap
<george.dunlap@eu.citrix.com> wrote:

> I'm not sure what you mean by "host config toolstack"; but the idea
> is to treat this configuration just like the bridging or OVS -- one
> thing set up at the beginning which the toolstack doesn't bother
> about.

Yep, that's what I meant.

> This is exactly like assigning vif numbers to guests, isn't it?

Unfortunately, no. The IFB names are fixed. If I load the IFB module
during "physical" host boot with modprobe ifb numifbs=20, the module
creates a set of 20 ifb devices a priori. There is no dynamic
creation/deletion like vifs, and no ability to control the naming (the
names are static: ifb0, ifb1, ... ifb19). Maybe we could alias them,
but that doesn't solve the ifb-pool requirement.

> No, the idea is that if an ifbX argument is present, you use it;
> otherwise not. Since this needs to be done exactly once per interface
> before it can be considered fully set up, the vif script is the
> natural place to put it.

I see your point. I was leaning towards doing this setup (i.e. ingress
filtering and traffic redirection to the IFB device) only when starting
Remus and not when starting the domain. But as you said, if the user
specifies that he wishes that interface's output to be buffered, we
might as well do this setup in the vif script itself. It incurs very
little overhead when Remus is not running anyway.

> If we throw that to the admin it's a royal pain to use. This can
> easily be done by a computer; an admin has better things to spend his
> mental energy on.

I agree, which is why that Netbufpool code was added to Remus a long
time back. Since the ifbs have to be allocated dynamically from the
pool, here is what I suggest.

In the domain config:

  vifs = ['mac=xxxxx,bridge=yyyy,buffer=yes']

In the hotplug script, we allocate an ifb from the ifb pool, assign it
to this vif and put the mapping into xenstore. xl can later read this
mapping and do the rest when Remus is enabled. This would allow the
user to selectively specify the set of interfaces on which output
buffering should be installed.
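[Editor's note: the hotplug-script side of that suggestion might look like the sketch below. The per-vif xenstore path follows the layout proposed earlier in the thread; the `XS=echo` stub keeps it a dry run since writing to xenstore needs a running xenstored.]

```shell
#!/bin/sh
# Sketch: when a vif is configured with buffer=yes, the hotplug script
# records its ifb assignment in xenstore for xl to read later.
# XS=echo makes this a dry run; drop it to really write to xenstore.
XS=echo

record_mapping() {
    domid=$1; vif=$2; ifb=$3
    $XS xenstore-write "/libxl/$domid/remus/netbufs/$vif" "$ifb"
}

record_mapping 5 vif5.0 ifb2
```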
On Thu, 2013-07-18 at 10:51 -0400, Shriram Rajagopalan wrote:

> What if there are two Remus streams on the same system? Then we need
> to maintain a list of which ifbs are in use and which ones are free.

Do we? I'd have thought just trying them until we find a free one would
suffice? Maintaining a pool would require some sort of central arbiter,
which doesn't (necessarily) exist in a system using libxl (it's
toolstack specific). Or maybe xenstore could be used to record which
ifbs are in use.

> They don't belong in the vif-hotplug script. Adding these lines means
> that all egress traffic from the VM will be routed via the IFB device
> whether or not Remus is running. I don't think people would want that.

Surely the hotplug script could trivially do:

  if remus_is_enabled(dom):
      ifb = find_me_a_free_ifb()
      apply_ifb(vif, ifb)

> If we throw the responsibility of specifying IFB devices onto the
> admin, libxl can basically do two system() calls as stated above,
> install the plug qdisc and move on.

Now that I understand how it fits together, I don't think asking the
host admin to allocate these particular resources is a good idea.

Ian
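[Editor's note: Ian's try-until-free idea could be sketched as below. Treating an ifb as free when it carries no qdisc yet is an assumption, and `qdisc_of` is stubbed (a real script would run `tc qdisc show dev "$1"`) so the search logic can be shown without root access.]

```shell
#!/bin/sh
# Sketch: probe ifb0..ifbN-1 and claim the first device that has no
# qdisc installed yet. qdisc_of is a stub standing in for
#   tc qdisc show dev "$1"
# Here ifb0 and ifb1 pretend to be already taken.
qdisc_of() {
    case "$1" in
        ifb0|ifb1) echo "qdisc plug 8001: ..." ;;
        *)         : ;;   # no output: device is free
    esac
}

find_free_ifb() {
    n=$1
    i=0
    while [ "$i" -lt "$n" ]; do
        if [ -z "$(qdisc_of "ifb$i")" ]; then
            echo "ifb$i"
            return 0
        fi
        i=$((i + 1))
    done
    return 1
}

find_free_ifb 10
```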