Another problem area is network storage.

In my case, I'm using legacy 1Gb and 2Gb Fibre Channel storage units connected to FC filers that hand off NFS/CIFS. It's a nice, simple setup, but in trying various combinations, things are still very slow and sluggish.

I've tried various things, such as having an NFS or direct FC share on each VM server and then installing the guest onto the network storage. That works fine, but when you start adding servers, things get a bit difficult. The servers themselves run just fine, nice and fast, but the gotcha so far seems to be copying files across that storage for any of the servers. For example, copying large, GB-sized backup files slows everything down way too much.

So I'm wondering how others deal with network-based storage when it comes to virtualization of this type.

Mike
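For context, the setup Mike describes (a guest installed onto network storage) typically boils down to an NFS mount on the dom0 plus a file-backed disk in the domU config. A minimal sketch, with the filer name, mount point, and guest name all hypothetical:

    # on the dom0: mount the filer share (hypothetical host and paths)
    mount -t nfs filer01:/vol/xen /mnt/xenstore

    # /etc/xen/guest01.cfg -- file-backed disk living on the NFS mount
    # (kernel/bootloader lines trimmed for brevity)
    name   = "guest01"
    memory = 1024
    disk   = ['file:/mnt/xenstore/guest01/disk.img,xvda,w']
    vif    = ['bridge=xenbr0']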
On Tue, Jan 20, 2009 at 11:53 PM, lists@grounded.net <lists@grounded.net> wrote:
> Another problem area is network storage.
>
> In my case, I'm using legacy 1Gb and 2Gb Fibre Channel storage units connected to FC filers that hand off NFS/CIFS. It's a nice, simple setup, but in trying various combinations, things are still very slow and sluggish.
>
> I've tried various things, such as having an NFS or direct FC share on each VM server and then installing the guest onto the network storage. That works fine, but when you start adding servers, things get a bit difficult.

The way I see it, when it comes to storage allocation, it's probably best (in terms of balance between performance and manageability) to treat Xen domUs like any other physical server: you allocate a different LUN for each domU. Besides giving higher I/O performance, this method has the added benefit of allowing easy conversion between domU <-> real servers.

> The servers themselves run just fine, nice and fast, but the gotcha so far seems to be copying files across that storage for any of the servers. For example, copying large, GB-sized backup files slows everything down way too much.

Yeah, I know. For a long time we centralized all storage on a 1 & 2 Gbps FC SAN. This worked fine for the most part, until we put I/O-hungry applications on it. An Oracle database was competing for I/O with web servers, making the performance of both suffer greatly. In the end we found that local storage actually provides MUCH higher I/O throughput, since it has "dedicated" disks with plenty of I/O bandwidth :p

So bottom line, my suggestions are:
- treat domU storage like a real server's storage
- the usual I/O optimizations apply: more disks for more throughput, plenty of available bandwidth, dedicated storage when possible, etc.

As a side note, if you're already familiar with FC filers, you might want to try SUN's new Unified Storage or even a simple OpenSolaris-based NAS. You can then use iSCSI-exported ZFS volumes, which give you features like (see the sketch below):
- snapshots and clones (can save space significantly, plus making things like backups a lot easier)
- compression (also a space saver, and might even increase I/O throughput in certain conditions)
- checksums and raidz to ensure data integrity

Regards,

Fajar
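The iSCSI-exported ZFS volume setup Fajar mentions looked roughly like this on an OpenSolaris box of that era, assuming the legacy shareiscsi property is available; the pool name, volume name, and size are invented for illustration:

    # create a 20GB ZFS volume (zvol) to back one domU
    zfs create -V 20g tank/vm01

    # export it over iSCSI (old built-in OpenSolaris target hook)
    zfs set shareiscsi=on tank/vm01

    # enable compression on the parent dataset
    zfs set compression=on tank

    # snapshot before a risky change, clone for a throwaway test copy
    zfs snapshot tank/vm01@pre-upgrade
    zfs clone tank/vm01@pre-upgrade tank/vm01-test

Snapshots and clones are copy-on-write, which is where the space savings and easy backups come from.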
> treat Xen domUs like any other physical server: you allocate a
> different LUN for each domU. Besides giving higher I/O
> performance, this method has the added benefit of allowing easy
> conversion between domU <-> real servers.

Do you mean something like having a SCSI chassis attached to the VM server, then exporting LUNs to each guest? Doesn't this defeat the idea of virtualizing the guests? Obviously, I am missing something.

I'm not sure I want to get into figuring out how to give guests direct access to, say, an FC adapter in the VM server, but perhaps it's just as easy. If so, then each server could easily have direct-attached access to its shared storage, such as GFS. A virtual GFS cluster sounds fun.

> In the end we found that local storage actually provides MUCH higher
> I/O throughput, since it has "dedicated" disks with plenty of I/O
> bandwidth :p

Ok, so you are just talking about local storage for the VM server, not so much for the individual guests.

> As a side note, if you're already familiar with FC filers, you might
> want to try SUN's new Unified Storage or even a simple

We're just a small shop and don't have a big budget, especially in this economy. That said, I have put together some pretty good hardware. I use a couple of smaller BlueArc filers to export NFS/CIFS shares and have a couple of BlueArc TITAN filers for when I need very fast I/O.

I am also experimenting with building my own storage devices at this point. For example, I've just put together a CentOS server with 4TB of storage using parted, and will be looking for leads on how to optimize I/O from such a machine, which will do nothing but serve NFS shares.

Mike
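On the NFS-serving side, a common starting point for a dedicated CentOS NFS box looks something like the sketch below; the export path, subnet, and buffer sizes are illustrative examples rather than tuned recommendations:

    # /etc/exports on the storage server (hypothetical path and subnet)
    # async boosts throughput but risks data loss on a server crash
    /export/backups  192.168.1.0/24(rw,async,no_subtree_check)

    # apply the export table without restarting nfsd
    exportfs -ra

    # on a client, larger rsize/wsize often helps bulk copies
    mount -t nfs -o rsize=32768,wsize=32768 storage01:/export/backups /mnt/backups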
On Thu, Jan 22, 2009 at 1:50 AM, lists@grounded.net <lists@grounded.net> wrote:
>> treat Xen domUs like any other physical server: you allocate a
>> different LUN for each domU.
>
> Do you mean something like having a SCSI chassis attached to the VM server, then exporting LUNs to each guest?

No. What I mean is:
- have a SAN/NAS that has direct access to the disks
- have the SAN export LUNs to dom0, one LUN for each domU
- connect the dom0s to the SAN via either FC or iSCSI
- have all dom0s import all domUs' LUNs
(see the sketch at the end of this message)

>> In the end we found that local storage actually provides MUCH higher
>> I/O throughput, since it has "dedicated" disks with plenty of I/O
>> bandwidth :p
>
> Ok, so you are just talking about local storage for the VM server, not so much for the individual guests.

Actually, my point is that if your primary concern is performance, it may be useful to simply use local disks instead of a SAN. You'll lose live migration, but depending on the situation, the trade-off can be worth it.

>> As a side note, if you're already familiar with FC filers, you might
>> want to try SUN's new Unified Storage
>
> We're just a small shop and don't have a big budget, especially in this economy.
> That said, I have put together some pretty good hardware. I use a couple of smaller BlueArc filers to export NFS/CIFS shares and have a couple of BlueArc TITAN filers for when I need very fast I/O.
> I am also experimenting with building my own storage devices at this point. For example, I've just put together a CentOS server with 4TB of storage using parted, and will be looking for leads on how to optimize I/O from such a machine, which will do nothing but serve NFS shares.

You might want to try experimenting with sharing iSCSI from that server as well. iSCSI can outperform NFS for certain workloads (for example: domU storage). Once you're familiar with using iSCSI, you can then try OpenSolaris, which AFAIK should make some jobs easier. SUN's Fishworks storage is based on OpenSolaris, but it has performance enhancements (for example, the use of SSDs) and a nice GUI.

Regards,

Fajar
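A minimal sketch of the LUN-per-domU flow Fajar outlines, using the Linux tgt target and open-iscsi initiator; the IQN, IP address, volume path, and guest name are all invented for illustration:

    # on the storage server: export one LVM volume as one iSCSI LUN per domU
    tgtadm --lld iscsi --op new --mode target --tid 1 \
           -T iqn.2009-01.net.example:vm01
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
           -b /dev/vg0/vm01
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

    # on each dom0: discover the target and log in with open-iscsi
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2009-01.net.example:vm01 -p 192.168.1.10 --login

    # /etc/xen/vm01.cfg -- hand the imported LUN to the guest as a whole disk
    disk = ['phy:/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2009-01.net.example:vm01-lun-1,xvda,w']

Using the persistent /dev/disk/by-path name keeps the domU config stable across reboots and hosts, which matters once several dom0s import the same set of LUNs for live migration.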
> You might want to try experimenting with sharing iSCSI from that server
> as well. iSCSI can outperform NFS for certain workloads (for example:
> domU storage). Once you're familiar with using iSCSI, you can then

I've used iSCSI on Windows machines as clients to Linux boxes, but I haven't tried that protocol with guests yet. I'll keep this in mind as something to try as well. Thanks.

Mike