1/ Anchor VNICs, the equivalent of Linux dummy interfaces; we need more flexibility in the way we set up xen networking. What is sad is that the code is already available in the unreleased Crossbow bits... but it won't appear in Nevada until Q1 2008 :(

This is a real blocker for me as my ISP just started implementing port security and locks my connection every time it sees a foreign MAC address using one of the IP addresses that were originally assigned to my dom0. On Linux, I can set up a dummy interface and create a bridge with it for a domU, but on Solaris I need a physical NIC per bridge !$!! @#$!

For this particular feature, I am ready to give a few hundred dollars as bounty if anyone has a workaround.

2/ PCI passthrough; this is really useful so you can let a domU access a PCI card. It comes in really handy if you want to virtualize a PBX that is using cheap Zaptel FXO cards. Again, on Linux, xen PCI passthrough has been available for a while. Last time I mentioned this on the xen Solaris discussion, I received a very dry reply.

3/ Problems with DMA under Xen... e.g. my Areca RAID cards work perfectly on an 8GB box without xen, but because of the way xen allocates memory I am forced to allocate only 1 or 2 GB for the dom0, or the Areca driver will fail miserably trying to do DMA above the first 4G of the address space. This very same problem affected xen under Linux over a year ago and seems to have been addressed there. Several people on the zfs-discuss list who complain about poor ZFS I/O performance are affected by this issue.

4/ Poor exploit mitigation under Solaris. In comparison, OpenBSD, grsec Linux and Windows from XP SP2 onwards have really good exploit mitigation... It is a shame, because Solaris offered a non-exec stack before nearly everyone else... but it stopped there... no heap protection, etc.

The only thing that is preventing me from switching back to Linux (no ZFS), FreeBSD (no xen) or OpenBSD (no xen and no ZFS) right now is ZFS, and it is the same reason I switched to Solaris in the first place.
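(For reference, the Linux-side workaround described above is roughly the following - a minimal sketch assuming the dummy module and bridge-utils are available; the interface and bridge names are only examples:

  modprobe dummy                  # creates dummy0, a NIC with no hardware behind it
  ip link set dummy0 up
  brctl addbr xenbr1              # a bridge dedicated to the domU
  brctl addif xenbr1 dummy0       # backed by the dummy interface, not a physical NIC

The domU's config then points its vif at that bridge, e.g. vif = [ 'bridge=xenbr1' ], and the domU's traffic can be routed through dom0 instead of being bridged straight onto the ISP-facing port, so the switch never sees a second MAC address.)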
K wrote:
> 4/ Poor exploit mitigation under Solaris. In comparison, OpenBSD, grsec Linux and Windows from XP SP2 onwards have really good exploit mitigation... It is a shame, because Solaris offered a non-exec stack before nearly everyone else... but it stopped there... no heap protection, etc.

Have you looked at privileges(5)? In particular, look at how little privilege many of the system daemons run with - sometimes even *less* privilege than a normal user login. Heap protection isn't the only way, and it only protects against certain types of exploit. It doesn't help protect against logic flaws that get a program to do something it shouldn't (but could) without giving it new code to run.

Though what this has to do with xen or zfs I don't know; this is a topic that would be better suited to security-discuss, so I've set the reply-to there.

--
Darren J Moffat
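(A quick way to see what Darren is describing - nscd here is only a placeholder, any long-running daemon will do - is to compare a daemon's privilege sets with those of an ordinary login shell:

  ppriv -S `pgrep -x nscd`
  ppriv -S $$

Services that have been converted to least privilege show a trimmed-down set on the E and P lines rather than the full basic set a login shell gets.)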
[ Removed zfs-discuss as it's not relevant. ]

On 28 Nov 2007, at 1:38pm, K wrote:
> 1/ Anchor VNICs, the equivalent of Linux dummy interfaces; we need more flexibility in the way we set up xen networking. What is sad is that the code is already available in the unreleased Crossbow bits... but it won't appear in Nevada until Q1 2008 :(

If we build a dummy driver to provide additional pseudo-physical interfaces, would you be willing to test it?

dme.
K wrote:
> 1/ Anchor VNICs, the equivalent of Linux dummy interfaces; we need more flexibility in the way we set up xen networking. What is sad is that the code is already available in the unreleased Crossbow bits... but it won't appear in Nevada until Q1 2008 :(
>
> This is a real blocker for me as my ISP just started implementing port security and locks my connection every time it sees a foreign MAC address using one of the IP addresses that were originally assigned to my dom0. On Linux, I can set up a dummy interface and create a bridge with it for a domU, but on Solaris I need a physical NIC per bridge !$!! @#$!
>
> For this particular feature, I am ready to give a few hundred dollars as bounty if anyone has a workaround.

Work in progress... It is highly unlikely we will wait until Crossbow is integrated before we have this functionality.

> 2/ PCI passthrough; this is really useful so you can let a domU access a PCI card. It comes in really handy if you want to virtualize a PBX that is using cheap Zaptel FXO cards. Again, on Linux, xen PCI passthrough has been available for a while. Last time I mentioned this on the xen Solaris discussion, I received a very dry reply.

This has been low on our priority list. We do plan on doing it relatively soon, but to date not a lot of customers have asked for it (for use in a production environment). We'll probably start on Solaris domU passthrough support within a month or two, and then do dom0 support after that. It just comes down to when folks free up from other xVM-related work to do the code.

> 3/ Problems with DMA under Xen... e.g. my Areca RAID cards work perfectly on an 8GB box without xen, but because of the way xen allocates memory I am forced to allocate only 1 or 2 GB for the dom0, or the Areca driver will fail miserably trying to do DMA above the first 4G of the address space. This very same problem affected xen under Linux over a year ago and seems to have been addressed there. Several people on the zfs-discuss list who complain about poor ZFS I/O performance are affected by this issue.

This should be relatively easy to fix, assuming I can get access to similar H/W.

Do you get any error messages? We do have a bug in contig alloc (it allocates too much memory) which was recently found and which is affecting nv_sata based systems. It may be related to that, or something that the driver could be doing better.

Can you send me more details about your setup (the card you're using, what's connected to it, where you got the driver and what version you have), behavior and perf on metal, and behavior and perf on xVM?

> 4/ Poor exploit mitigation under Solaris. In comparison, OpenBSD, grsec Linux and Windows from XP SP2 onwards have really good exploit mitigation... It is a shame, because Solaris offered a non-exec stack before nearly everyone else... but it stopped there... no heap protection, etc.
>
> The only thing that is preventing me from switching back to Linux (no ZFS), FreeBSD (no xen) or OpenBSD (no xen and no ZFS) right now is ZFS, and it is the same reason I switched to Solaris in the first place.

I'll let the security folks handle this :-)

MRJ
Mark Johnson wrote:
> K wrote:
...
>> 3/ Problems with DMA under Xen... e.g. my Areca RAID cards work perfectly on an 8GB box without xen, but because of the way xen allocates memory I am forced to allocate only 1 or 2 GB for the dom0, or the Areca driver will fail miserably trying to do DMA above the first 4G of the address space. This very same problem affected xen under Linux over a year ago and seems to have been addressed there. Several people on the zfs-discuss list who complain about poor ZFS I/O performance are affected by this issue.
>
> This should be relatively easy to fix, assuming I can get access to similar H/W.
>
> Do you get any error messages? We do have a bug in contig alloc (it allocates too much memory) which was recently found and which is affecting nv_sata based systems. It may be related to that, or something that the driver could be doing better.
>
> Can you send me more details about your setup (the card you're using, what's connected to it, where you got the driver and what version you have), behavior and perf on metal, and behavior and perf on xVM?

Please keep me in the loop on any Areca "arcmsr" issues that you come across - I'm working on integrating this driver into ON, so feedback will be valuable.

thanks in advance,
James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp
http://www.jmcp.homeunix.com/blog
On Wednesday 28 November 2007 05:38:34, K wrote:
> 1/ Anchor VNICs, the equivalent of Linux dummy interfaces; we need more flexibility in the way we set up xen networking. What is sad is that the code is already available in the unreleased Crossbow bits... but it won't appear in Nevada until Q1 2008 :(
...
> The only thing that is preventing me from switching back to Linux (no ZFS), FreeBSD (no xen) or OpenBSD (no xen and no ZFS) right now is ZFS, and it is the same reason I switched to Solaris in the first place.

FreeBSD has had Xen for a while. It's still lacking in a few areas, though. http://txrx.org/xen/

ZFS and DTrace are on Mac OS X Leopard - what's your point? There's QEMU, Parallels, VMware, VirtualBox, and Bochs, and the kernel sources are under an OSI-approved license, so it could theoretically be implemented there too, but there's no point.

Solaris needs Xen because VMware won't have anything to do with them, even though it's an enterprise product. Linux users are spoiled, with no technical advantages to grant the right to be spoiled, yet they get a slew of other things first; it's all propaganda, liberal licenses and blind sheep driving the core projects, and they got multiple large companies worth more than Sun to chip in.

PCI passthrough is hardly easily done, and as others replied, it is not a huge priority. I personally don't need it, and most people I know who use Xen on Linux don't even use it.

The Xen port hasn't been in development for years like it has on Linux - give them a break. OpenSolaris has only been available since mid-2005, and wasn't usable until 2006; they've come a long way, and there are many areas to address. Linux has had 16 years, and it took 12 to become somewhat viable, especially for general users.
OpenBSD is just a rebranded NetBSD made dog slow.

Point being, it'll all come in due time. If you need it now, just use the OS you need to get it done, and live without ZFS if you have to, if Xen is so important to you. As for security, this is hardly the right place to bring that up, and you obviously don't know about any of the Solaris security technologies.

James
OK, cc'ing zfs-discuss was probably a mistake. However, I don't like the way you troll me and single out point 4, while the other 3 points are directly related to Xen.

Point 1: I can't migrate a xen domU from a Linux dom0, because it is impossible to keep the previous network configuration without adding hardware (an extra network card). So much for virtualization!

Point 2: PCI passthrough is an original xen feature that is missing in xVM.

Point 3: On Solaris, xen breaks your system when you have over 4 GB of RAM... the same drivers work fine when you're not running the xen hypervisor.

Point 4 was just another thing that annoys me a little about Solaris. I mentioned in a previous thread that there has been some progress recently, but Solaris still has a lot of work to do to improve its security. Do I need to remind you of the recent -froot telnet bug?
The above post was meant to go directly to Darren - misfire, I am sorry. Thanks for the sensible answers from the others.
K wrote:
>> OpenBSD is just a rebranded NetBSD made dog slow.
>>
>> Point being, it'll all come in due time. If you need it now, just use the OS you need to get it done, and live without ZFS if you have to, if Xen is so important to you. As for security, this is hardly the right place to bring that up, and you obviously don't know about any of the Solaris security technologies.
>>
>> James
>
> You're such a troll :) Solaris security technologies... you mean stuff like the recent -froot telnet bug?
>
> I am very familiar with Solaris security features... and exploit mitigation is missing...
> I don't care that there are zones and least-privilege features when I can't protect my system from a simple heap overflow.
>
> There are tons of programmers out there working for Sun, Microsoft, Oracle and open source projects who are incapable of writing secure code, and that's not going to change any time soon. At least on a Linux grsec kernel, on OpenBSD or on Windows XP SP2, most of their shitty code will be harmless.

I'm not trolling; you're the one who is. Linux is the system made by most of those amateurs you're talking about, and grsec is hardly the default. Let's not bring up the divide between projects - grsec, SELinux, and plenty of tools that merely duplicate features that already exist elsewhere. That telnet bug only affects Solaris 10 U3, and telnet isn't enabled by default anyway. I'm done talking with you. If you want things to be fixed, be professional and report the problems in a neutral way to the respective communities responsible for those areas. Simply stereotyping the whole system, all of Sun's developers and the community as being behind the times, dumb, or careless is just negligent. We can do without you. If you want to help, help; otherwise take your trolling elsewhere. Write your own crappy ZFS implementation, I don't care, go away.

James
Please excuse that my comment is cross-posted to both the Xen and ZFS discuss forums; I didn't notice it on the Xen forum until I had posted. Anyway, here goes. For any info, testing, etc., please contact me.

Regarding the following, which I also hit, see http://www.opensolaris.org/jive/thread.jspa?messageID=180995 and if any further details or tests are required, I would be happy to assist.

> 3/ Problems with DMA under Xen... e.g. my Areca RAID cards work perfectly on an 8GB box without xen, but because of the way xen allocates memory I am forced to allocate only 1 or 2 GB for the dom0, or the Areca driver will fail miserably trying to do DMA above the first 4G of the address space. This very same problem affected xen under Linux over a year ago and seems to have been addressed there. Several people on the zfs-discuss list who complain about poor ZFS I/O performance are affected by this issue.

This should be relatively easy to fix, assuming I can get access to similar H/W.

Do you get any error messages? We do have a bug in contig alloc (it allocates too much memory) which was recently found and which is affecting nv_sata based systems. It may be related to that, or something that the driver could be doing better.

Can you send me more details about your setup (the card you're using, what's connected to it, where you got the driver and what version you have), behavior and perf on metal, and behavior and perf on xVM?
Please take this discussion off-alias.

The folks who work on xVM @ Sun like both Linux and Solaris. A lot of us use both on a daily basis.

Thanks,

MRJ

James Cornell wrote:
> K wrote:
...
Alright, Mark. I already knew Sun liked it; they make Sun Studio and Java first-class citizens and have Linux support for their hardware. I use Linux and Solaris myself, with about the same number of problems on each; each has its benefits and drawbacks, one or the other is better suited to a given task, and sometimes they work well together, such as in a network environment. I just thought it was mindless to attack the whole community and make assertions about Xen's lack of features when it's hardly as mature on Nevada. I'm sure the issues can be resolved through the proper channels.

Oh, and I'm sorry for being abrasive, guys. I hope we can just get along and strengthen the community and OpenSolaris by cooperating in a neutral way.

James

On Nov 29, 2007, at 4:19 AM, Mark Johnson wrote:
> Please take this discussion off-alias.
>
> The folks who work on xVM @ Sun like both Linux and Solaris. A lot of us use both on a daily basis.
>
> Thanks,
>
> MRJ
Martin wrote:
> Please excuse that my comment is cross-posted to both the Xen and ZFS discuss forums; I didn't notice it on the Xen forum until I had posted. Anyway, here goes. For any info, testing, etc., please contact me.
>
> Regarding the following, which I also hit, see http://www.opensolaris.org/jive/thread.jspa?messageID=180995 and if any further details or tests are required, I would be happy to assist.
>
> I set /etc/system's zfs:zfs_arc_max = 0x10000000 and it seems better now.
>
> I had previously tried setting it to 2 GB rather than 256 MB as above, without success... I should have tried much lower!
>
> It "seems" that when I perform I/O through a Windows XP HVM, I get a "reasonable" I/O rate, but I'm not sure at this point in time. When a write is made from within the HVM VM, would I expect the same DMA issue to arise? (I can't really tell either way at the moment, because it's not super fast anyway.)

A couple of things here...

I have found that ~2G is good for a dom0 if you use ZFS. If you're using ZFS in dom0, you should use a dom0_mem entry in grub's menu.lst and fix the amount of memory dom0 starts with. E.g. my entry looks like...

title 64-bit dom0
root (hd0,0,d)
kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M com1=9600,8n1 console=com1
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -k
module$ /platform/i86pc/$ISADIR/boot_archive

Then, make sure you don't auto-balloon dom0 down (by creating guests which would take some of that memory from dom0). The best way to do this is to set config/dom0-min-mem in xvm/xend (e.g. svcprop xvm/xend). ZFS and auto-ballooning down don't seem to work great together (I haven't done a lot of testing to characterize this, though). If you do want to auto-balloon, or want to use less memory in a ZFS-based dom0, setting zfs_arc_max to a low value seems to work well.

I/O performance in Windows HVM guests is not good at this point. It won't be until we have Windows PV drivers available.

Thanks,

MRJ
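(To make that concrete - the values below are only placeholders, not recommendations:

  In /etc/system, cap the ARC:                set zfs:zfs_arc_max = 0x80000000
  Check xend's minimum dom0 memory setting:   svcprop -p config/dom0-min-mem xvm/xend

Combined with the dom0_mem= entry in menu.lst above, this keeps dom0's memory fixed and the ARC bounded within it.)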
On 29/11/2007, at 4:34 AM, Mike Dotson wrote:
> On Wed, 2007-11-28 at 20:38 +0700, K wrote:
>> 1/ Anchor VNICs, the equivalent of Linux dummy interfaces; we need more flexibility in the way we set up xen networking. What is sad is that the code is already available in the unreleased Crossbow bits... but it won't appear in Nevada until Q1 2008 :(
>>
>> This is a real blocker for me as my ISP just started implementing port security and locks my connection every time it sees a foreign MAC address using one of the IP addresses that were originally assigned to my dom0. On Linux, I can set up a dummy interface and create a bridge with it for a domU, but on Solaris I need a physical NIC per bridge !$!! @#$!
>
> Not sure if this would be feasible for you, but look at the man pages for ifconfig, in particular the "ether" option...

It is only possible using anchor VNICs. Nicolas Roux from the Crossbow project replied with this:

> We are doing a merge of Crossbow with a more recent build of Nevada, so we should have new "pre-release" archives which include both Crossbow and Xen in around mid-December. I should be able to integrate the anchor VNIC functionality by then. The distribution will be through a pre-integration bfu archive until the Crossbow putback next year.
On Nov 28, 2007, at 5:38 AM, K wrote:
> 1/ Anchor VNICs, the equivalent of Linux dummy interfaces; we need more flexibility in the way we set up xen networking. What is sad is that the code is already available in the unreleased Crossbow bits... but it won't appear in Nevada until Q1 2008 :(

Indeed... a frustration for many, including myself, who need specific pieces of functionality from larger projects that have larger schedules.

We've been using a "dummy" NIC driver for the development of the Virtual Router project, which should be coming online soon. I'm in the process of getting legal approval for making the driver available sooner... It's based on afe, which is already in the OpenSolaris repository, so I don't see any problems with making it available, but I need to check first.

The intent of the driver, of course, is to bridge the gap until Crossbow anchor VNICs appear in Nevada, so any long-term dependency on the driver should be discouraged, but having to allocate hardware NICs for virtual interfaces in the meantime is certainly a more substantial discouragement.

Kev
On 29/11/2007, at 6:49 PM, Martin wrote:
> Please excuse that my comment is cross-posted to both the Xen and ZFS discuss forums; I didn't notice it on the Xen forum until I had posted. Anyway, here goes. For any info, testing, etc., please contact me.
>
> Regarding the following, which I also hit, see http://www.opensolaris.org/jive/thread.jspa?messageID=180995 and if any further details or tests are required, I would be happy to assist.
>
>> 3/ Problems with DMA under Xen... e.g. my Areca RAID cards work perfectly on an 8GB box without xen, but because of the way xen allocates memory I am forced to allocate only 1 or 2 GB for the dom0, or the Areca driver will fail miserably trying to do DMA above the first 4G of the address space. This very same problem affected xen under Linux over a year ago and seems to have been addressed there. Several people on the zfs-discuss list who complain about poor ZFS I/O performance are affected by this issue.
>
> This should be relatively easy to fix, assuming I can get access to similar H/W.
>
> Do you get any error messages? We do have a bug in contig alloc (it allocates too much memory) which was recently found and which is affecting nv_sata based systems. It may be related to that, or something that the driver could be doing better.
>
> Can you send me more details about your setup (the card you're using, what's connected to it, where you got the driver and what version you have), behavior and perf on metal, and behavior and perf on xVM?

The workaround is to limit the dom0 memory to 1024M, reserving 7 GB for my domUs:

kernel$ /boot/$ISADIR/xen.gz dom0_mem=1024M com1=57600,8n1 console=com1

Here are the relevant threads. I posted the logs and errors in some of them:

http://www.opensolaris.org/jive/thread.jspa?messageID=162481
http://www.opensolaris.org/jive/thread.jspa?messageID=162462
http://www.opensolaris.org/jive/thread.jspa?messageID=162459

and recently Martints had the same problem:

http://www.opensolaris.org/jive/thread.jspa?threadID=43518&start=0&tstart=0
> The workaround is to limit the dom0 memory to 1024M, reserving 7 GB for my domUs:
>
> kernel$ /boot/$ISADIR/xen.gz dom0_mem=1024M com1=57600,8n1 console=com1
...
> and recently Martints had the same problem:
>
> http://www.opensolaris.org/jive/thread.jspa?threadID=43518&start=0&tstart=0

Hmm, but I had the impression that Martints was using different hardware (most likely an Intel ICH S-ATA (?) controller). AFAICT, Martints didn't mention the Areca hardware...
(It's me in the reply above too.)

And I'm using the Intel ICH SATA controller in the default "cmdk" mode.
This does not fix it for me with the dom0 memory limited to 1 GB. I also tried /etc/system with

set zfs:zfs_arc_max = 0x20000000

(512 MB); then it's OK, but if it's twice that (1 GB) or larger, then I get the problem.

So, I'm now running with the above zfs_arc_max, plus

svccfg -s xvm/xend setprop config/dom0-min-mem=2048

as suggested by Mark elsewhere. At least I've been able to put the memory DIMMs back in the machine today!
> (It's me in the reply above too.)
>
> And I'm using the Intel ICH SATA controller in the default "cmdk" mode.

AFAIK, "ata"/"cmdk" can't access memory >= 4G for DMA, so it must use bounce buffers. Could this be a reason why the system is slow with 4G of physical memory installed (where a small amount, ~0.5G, of physical memory is remapped >= 4G), or with 8G of physical memory installed (where ~4.5G of memory would be >= 4G)?

Did you ever try to use the Intel S-ATA controller in AHCI mode? A quick look at the Solaris ahci driver seems to indicate that *some* variants of Intel AHCI S-ATA are able to use 64-bit DMA addresses.
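(If you do switch the controller to AHCI in the BIOS, one quick sanity check - the grep patterns here are only examples - is to confirm which driver actually attached afterwards:

  prtconf -D | grep -i -e ahci -e pci-ide
)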
Excellent. I've changed from "cmdk" mode (where format reports device paths like .../ide@0/cmdk@0,0) to AHCI mode (where format reports device paths like .../disk@1,0), and all is much better... in fact, around ten times better!

I did notice two little issues, just for information:

(a) The default disk ordering of my first two disks swapped... e.g. my old root disk, which was the "1st", became the "2nd", and my old "2nd" disk became the first. In the event this wasn't a problem, but it may catch the unwary.

(b) When I jumpstarted a new install on the now "1st" disk (i.e. the one that had been the "2nd" one under the ATA/IDE/cmdk mode), the jumpstart scripts failed to initialize it, and so did "fdisk -B /blah/blah". I needed to "dd" /dev/zero over the first cylinder or so, and then I could "fdisk -B", followed by a new successful jumpstart. I'm not sure if this had to do with the disk length being reported differently between "cmdk" and "ahci".
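(For anyone who hits the same thing: the recovery described above amounts to something like the following - the device name is only an example, and zeroing the start of the disk of course destroys whatever label and partition table is there:

  dd if=/dev/zero of=/dev/rdsk/c1t0d0p0 bs=512 count=2048
  fdisk -B /dev/rdsk/c1t0d0p0

after which the jumpstart install went through normally.)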