I've installed debian lenny amd64, it is frozen now.
I've installed a kernel with xen support but it doesn't start.
It says "you need to load the kernel first", but I've installed all the
packages concerning xen, and also the packages related to the kernel.
Perhaps lenny doesn't support xen anymore?
Any solution?
Hi Mauro,

On Thursday, 20.11.2008, 13:03 +0100, Mauro wrote:
> I've installed debian lenny amd64, it is frozen now.
> I've installed a kernel with xen support but it doesn't start.

I'm running a lenny dom0:

# uname -r
2.6.26-1-xen-amd64

# xm info | grep xen
release                : 2.6.26-1-xen-amd64
xen_major              : 3
xen_minor              : 2
xen_extra              : -1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
xen_changeset          : unavailable
xend_config_format     : 4

I installed with the following command:

# apt-get install xen-linux-system-2.6.26-1-xen-amd64 xen-utils-3.2-1 lvm2 xen-tools python-xml

> It says "you need to load the kernel first", but I've installed all the
> packages concerning xen, and also the packages related to the kernel.
> Perhaps lenny doesn't support xen anymore?
> Any solution?

Could you please post

dpkg -l | grep xen

and the xen boot entry in menu.lst?

Regards,

Thomas
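As an aside (not from Thomas's mail): a quick, generic way to tell whether the
hypervisor itself was booted, rather than just the bare kernel, is to look at
the Xen capabilities file in dom0:

    # present and containing "control_d" only when running as dom0 under Xen
    cat /proc/xen/capabilities

If the file is missing, the machine came up on the plain kernel without the
hypervisor, which matches the grub problem discussed below.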
Hi Mauro,

On Thursday, 20.11.2008, 13:25 +0100, Mauro wrote:
> ....
> > It says "you need to load the kernel first", but I've installed all the
> > packages concerning xen, and also the packages related to the kernel.
> > Perhaps lenny doesn't support xen anymore?
> > Any solution?
>
> > could you please post
> > dpkg -l | grep xen
>
> ii  libxenstore3.0                       3.2.1-2   Xenstore communications library for Xen
> ii  linux-image-2.6.26-1-xen-amd64       2.6.26-8  Linux 2.6.26 image on AMD64
> ii  linux-modules-2.6.26-1-xen-amd64     2.6.26-8  Linux 2.6.26 modules on AMD64
> ii  xen-hypervisor-3.2-1-amd64           3.2.1-2   The Xen Hypervisor on AMD64
> ii  xen-linux-system-2.6.26-1-xen-amd64  2.6.26-8  XEN system with Linux 2.6.26 image on AMD64
> ii  xen-shell                            1.8-3     Console based Xen administration utility
> ii  xen-tools                            3.9-4     Tools to manage Debian XEN virtual servers
> ii  xen-utils-3.2-1                      3.2.1-2   XEN administrative tools
> ii  xen-utils-common                     3.2.0-2   XEN administrative tools - common files
> ii  xenstore-utils
>
> > and the xen-boot-entry in menu.lst
>
> I'm using grub2, I don't have a menu.lst anymore.

But a grub.cfg, which would be interesting :)

> Thank you.

Regards,

Thomas
Hi Mauro,

On Thursday, 20.11.2008, 13:40 +0100, Mauro wrote:
> > > and the xen-boot-entry in menu.lst
> >
> > I'm using grub2, I don't have a menu.lst anymore.
>
> > But a grub.cfg, which would be interesting :)
>
> Here it is:
>
> set default=0
> set timeout=5
> set root=(hd0,1)
> search --fs-uuid --set 1730e593-bc15-492b-a4ae-190d3eb7abaf
> if font /usr/share/grub/ascii.pff ; then
>   set gfxmode=640x480
>   insmod gfxterm
>   insmod vbe
>   terminal gfxterm
> fi
> ### END /etc/grub.d/00_header ###
>
> ### BEGIN /etc/grub.d/05_debian_theme ###
> set menu_color_normal=cyan/blue
> set menu_color_highlight=white/blue
>
> set root=(hd0,1)
> search --fs-uuid --set 1730e593-bc15-492b-a4ae-190d3eb7abaf
> menuentry "Debian GNU/Linux, linux 2.6.26-1-xen-amd64" {
>         linux /boot/vmlinuz-2.6.26-1-xen-amd64 root=UUID=1730e593-bc15-492b-a4ae-190d3eb7abaf ro
>         initrd /boot/initrd.img-2.6.26-1-xen-amd64
> }
> menuentry "Debian GNU/Linux, linux 2.6.26-1-xen-amd64 (single-user mode)" {
>         linux /boot/vmlinuz-2.6.26-1-xen-amd64 root=UUID=1730e593-bc15-492b-a4ae-190d3eb7abaf ro single
>         initrd /boot/initrd.img-2.6.26-1-xen-amd64
> }
> ....

As you can see, there's no xen entry at all. Adding something like this

menuentry "Xen 3.2" {
        multiboot (hd0,1)/boot/xen-3.2-1-amd64.gz dom0_mem=256M
        module (hd0,1)/boot/vmlinuz-2.6.26-1-xen-amd64 root=UUID=1730e593-bc15-492b-a4ae-190d3eb7abaf ro
        module (hd0,1)/boot/initrd.img-2.6.26-1-xen-amd64
}

should solve the problem for you.

> thank you.

Please answer to the list and not by PM.

Regards,

Thomas
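A side note, not from the original mail: grub.cfg is regenerated whenever
update-grub runs, so a hand-edited entry can be lost again. A minimal sketch
of how to keep it, assuming the grub2 version at hand supports custom scripts
in /etc/grub.d (the file name 40_xen is an invented example):

    #!/bin/sh
    # hypothetical /etc/grub.d/40_xen - mark it executable (chmod +x)
    # update-grub concatenates the stdout of every executable script in
    # /etc/grub.d into grub.cfg, so this just prints the extra entry:
    cat << 'EOF'
    menuentry "Xen 3.2" {
            multiboot (hd0,1)/boot/xen-3.2-1-amd64.gz dom0_mem=256M
            module (hd0,1)/boot/vmlinuz-2.6.26-1-xen-amd64 root=UUID=1730e593-bc15-492b-a4ae-190d3eb7abaf ro
            module (hd0,1)/boot/initrd.img-2.6.26-1-xen-amd64
    }
    EOF

Running update-grub afterwards regenerates grub.cfg with the Xen entry included.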
> Adding something like this
>
> menuentry "Xen 3.2" {
>         multiboot (hd0,1)/boot/xen-3.2-1-amd64.gz dom0_mem=256M
>         module (hd0,1)/boot/vmlinuz-2.6.26-1-xen-amd64 root=UUID=1730e593-bc15-492b-a4ae-190d3eb7abaf ro
>         module (hd0,1)/boot/initrd.img-2.6.26-1-xen-amd64
> }
>
> should solve the problem for you.

Ok, great!!! So it is a grub2 bug.
Thank you.
2008/11/20 Thomas Halinka <lists@thohal.de>

> Hi Mauro,
>
> On Thursday, 20.11.2008, 13:03 +0100, Mauro wrote:
> > I've installed debian lenny amd64, it is frozen now.
> > I've installed a kernel with xen support but it doesn't start.
>
> I'm running a lenny dom0

Do you use lenny with xen in production environments?
Mauro wrote:

> I've installed debian lenny amd64, it is frozen now.
> I've installed a kernel with xen support but it doesn't start.
> It says "you need to load the kernel first", but I've installed all the
> packages concerning xen, and also the packages related to the kernel.
> Perhaps lenny doesn't support xen anymore?

I found that the Xen kernel in Lenny doesn't work as a Dom0 kernel - try Google!

I installed the linux-image-2.6.18-6-xen-686 kernel (and modules), which from
vague memory was done by temporarily adding the Etch repositories to apt's
config and installing that specific version. Checking, I see that all my
guests are also running 2.6.16-5 or -6, largely because "it works" and I was
short of time for experimentation.

--
Simon Hobson

Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.
Hi again,

On Friday, 21.11.2008, 10:16 +0100, Mauro wrote:
> 2008/11/20 Thomas Halinka <lists@thohal.de>
> > Hi Mauro,
> >
> > On Thursday, 20.11.2008, 13:03 +0100, Mauro wrote:
> > > I've installed debian lenny amd64, it is frozen now.
> > > I've installed a kernel with xen support but it doesn't start.
> >
> > I'm running a lenny dom0
>
> Do you use lenny with xen in production environments?

Yep - I'm running a XEN cluster on debian lenny with about 60 domUs on 8
cluster nodes, with live migration, HA failover and live backup through
LVM snapshotting.

Regards,

Thomas
> Yep - I'm running a XEN cluster on debian lenny with about 60 domUs on 8
> cluster nodes, with live migration, HA failover and live backup through
> LVM snapshotting.

Great!!! :-)
Thank you.

Mauro
2008/11/21 Mauro <mrsanna1@gmail.com>

> > Yep - I'm running a XEN cluster on debian lenny with about 60 domUs on 8
> > cluster nodes, with live migration, HA failover and live backup through
> > LVM snapshotting.
>
> Great!!! :-)
> Thank you.

I've noticed that in xen-tools the amount of memory can't be specified in GB,
but only in MB.
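For illustration only - assuming the usual xen-create-image invocation from
xen-tools, with a made-up hostname - the workaround is simply to give the
value in megabytes:

    # request 2 GB of domU RAM by expressing it in MB, since a GB suffix is not accepted here
    xen-create-image --hostname=example.domU --memory=2048Mb

(The exact set of accepted size suffixes may differ between xen-tools versions.)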
On Fri, Nov 21, 2008 at 11:47:35AM +0100, Thomas Halinka wrote:
> Hi again,
>
> On Friday, 21.11.2008, 10:16 +0100, Mauro wrote:
> > Do you use lenny with xen in production environments?
>
> Yep - I'm running a XEN cluster on debian lenny with about 60 domUs on 8
> cluster nodes, with live migration, HA failover and live backup through
> LVM snapshotting.

This is interesting. Want to tell more about your setup? CLVM? iSCSI?

What kind of physical server hardware? What kind of storage?

What exact kernel and Xen versions?

Thanks!

-- Pasi
> Mauro wrote:
> > I've installed debian lenny amd64, it is frozen now.
> > I've installed a kernel with xen support but it doesn't start.
> > It says "you need to load the kernel first", but I've installed all the
> > packages concerning xen, and also the packages related to the kernel.
> > Perhaps lenny doesn't support xen anymore?
>
> I found that the Xen kernel in Lenny doesn't work as a Dom0 kernel
> - try Google!
>
> I installed the linux-image-2.6.18-6-xen-686 kernel (and modules), which from
> vague memory was done by temporarily adding the Etch repositories to apt's
> config and installing that specific version. Checking, I see that all my
> guests are also running 2.6.16-5 or -6, largely because "it works" and I was
> short of time for experimentation.

Have a look at the 'Snapshots' section of http://wiki.debian.org/DebianKernel

I'm using the amd64 kernel from
http://kernel-archive.buildserver.net/debian-kernel/pool/main/l/linux-2.6.18-xen-3.3/
but apparently the 2.6.26 xen kernels under
http://kernel-archive.buildserver.net/debian-kernel/pool/main/l/linux-2.6/
work also.

James
2008/11/21 Simon Hobson <linux@thehobsons.co.uk>

> Mauro wrote:
> > I've installed debian lenny amd64, it is frozen now.
> > I've installed a kernel with xen support but it doesn't start.
> > It says "you need to load the kernel first", but I've installed all the
> > packages concerning xen, and also the packages related to the kernel.
> > Perhaps lenny doesn't support xen anymore?
>
> I found that the Xen kernel in Lenny doesn't work as a Dom0 kernel - try Google!

Sorry, I don't understand. I'm running the xen kernel with lenny in my dom0.
It works perfectly. Confirmations?
Hi all,

I'm pretty interested in studying this kind of solution for my office.
Following the questions from Pasi, I would like to know from you, Thomas,
whether you are using a SAN for your cluster.

If so, what kind of data access technologies do you use with it?

Last question: how do you manage HA, live migration and snapshots - with your
own scripts?

Thanks a lot for any response.

Nicolas.

On Fri, Nov 21, 2008 at 7:57 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Fri, Nov 21, 2008 at 11:47:35AM +0100, Thomas Halinka wrote:
> > Yep - I'm running a XEN cluster on debian lenny with about 60 domUs on 8
> > cluster nodes, with live migration, HA failover and live backup through
> > LVM snapshotting.
>
> This is interesting. Want to tell more about your setup? CLVM? iSCSI?
>
> What kind of physical server hardware? What kind of storage?
>
> What exact kernel and Xen versions?
>
> Thanks!
>
> -- Pasi
On Saturday, 22.11.2008, 20:19 +0400, Nicolas Ruiz wrote:
> Hi all,
>
> I'm pretty interested in studying this kind of solution for my office.
> Following the questions from Pasi, I would like to know from you, Thomas,
> whether you are using a SAN for your cluster.

I built up my own SAN with mdadm, lvm and vblade.

> If so, what kind of data access technologies do you use with it?

ATA over Ethernet (AoE), which sends ATA commands over Ethernet (layer 2).
It's something like SAN over Ethernet and much faster than iSCSI, since no
TCP/IP is used. Also, failover was very tricky with iSCSI...

> Last question: how do you manage HA, live migration and snapshots - with your
> own scripts?

heartbeat2 with crm and constraints, and the rest is managed through openqrm.

> On Fri, Nov 21, 2008 at 7:57 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> > This is interesting. Want to tell more about your setup? CLVM? iSCSI?

Nope, just AoE and LVM.

> > What kind of physical server hardware? What kind of storage?

It's self-built. We had evaluated FC SAN solutions, but they were slow,
inflexible and very expensive. We're using standard servers with bonding
over 10Gbit NICs.

This setup transfers 1300 MB/s at the moment, is highly scalable and was
about 70% cheaper than an FC solution.

> > What exact kernel and Xen versions?

At the moment it's xen 3.2 and a 2.6.18 kernel. I am evaluating 3.3 and 2.6.26 atm.

> > Thanks!
> >
> > -- Pasi

If you're interested in this setup, I could get you an overview with a small
abstract of what is managed where and why... you know ;)

Thomas
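Not from the thread - just to make the vblade/AoE side concrete, a minimal
sketch with invented interface, shelf/slot numbers and device names:

    # on a storage box: export /dev/md0 as AoE shelf 0, slot 0 via eth0
    # (vbladed is the daemonizing wrapper around vblade)
    vbladed 0 0 eth0 /dev/md0

    # on a client (e.g. a dom0): load the AoE driver and discover the exports
    modprobe aoe
    aoe-discover
    aoe-stat        # the export above then shows up as /dev/etherd/e0.0

Because everything runs over raw Ethernet frames, client and target only need
to share a layer-2 segment; there is no TCP/IP configuration involved, which
is the point Thomas makes about AoE versus iSCSI.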
On Thu, Nov 27, 2008 at 11:10 AM, Thomas Halinka <lists@thohal.de> wrote:
> On Saturday, 22.11.2008, 20:19 +0400, Nicolas Ruiz wrote:
> > This is interesting. Want to tell more about your setup? CLVM?
> > iSCSI?
>
> Nope, just AoE and LVM.

LVM and not CLVM? I guess that means that you use LVM at the storage
boxes and export (with vblade) the LVs?

--
Javier
Hi Javier,

On Thursday, 27.11.2008, 11:52 -0500, Javier Guerra wrote:
> On Thu, Nov 27, 2008 at 11:10 AM, Thomas Halinka <lists@thohal.de> wrote:
> > Nope, just AoE and LVM.
>
> LVM and not CLVM? I guess that means that you use LVM at the storage
> boxes and export (with vblade) the LVs?

My storage header box (which is also the openqrm server) has sw-raid over a
bunch of intel servers with vblade.

So I created some mdX devices and put them in a VG.

/dev/md0 ... /dev/md19 are my AoE backend and are PVs in a VG.

On this storage header I'm exporting the lvols through AoE again. So my XEN
boxes grab their lvols...

Sounds confusing, but maybe a drawing would show the concept.

Thomas
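A rough sketch of the layering just described, with every device name, RAID
level and size invented purely for illustration:

    # on the storage header: software RAID over AoE devices exported by the storage boxes
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/etherd/e1.0 /dev/etherd/e2.0

    # the md devices become PVs in one big volume group
    pvcreate /dev/md0
    vgcreate data /dev/md0          # later: vgextend data /dev/mdN as boxes are added

    # one LV per domU, re-exported over AoE for the Xen hosts to pick up
    lvcreate -L 10G -n vm01-disk data
    vbladed 1 0 eth0 /dev/data/vm01-disk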
Thomas Halinka wrote:
> It's self-built. We had evaluated FC SAN solutions, but they were slow,
> inflexible and very expensive. We're using standard servers with bonding
> over 10Gbit NICs.
>
> This setup transfers 1300 MB/s at the moment, is highly scalable and was
> about 70% cheaper than an FC solution.

Hi Thomas,

just out of curiosity, how did you measure the throughput of 1300 MB
(megabytes) per second? What does this value describe?

Stefan
On Thu, Nov 27, 2008 at 05:10:30PM +0100, Thomas Halinka wrote:
> I built up my own SAN with mdadm, lvm and vblade.
>
> ATA over Ethernet (AoE), which sends ATA commands over Ethernet (layer 2).
> It's something like SAN over Ethernet and much faster than iSCSI, since no
> TCP/IP is used. Also, failover was very tricky with iSCSI...

OK.

> heartbeat2 with crm and constraints, and the rest is managed through openqrm.

Hmm.. so openqrm can take care of locking domU disks in dom0s for live
migration? ie. making sure only a single dom0 accesses a domU's disks at a time..

> Nope, just AoE and LVM.
>
> It's self-built. We had evaluated FC SAN solutions, but they were slow,
> inflexible and very expensive. We're using standard servers with bonding
> over 10Gbit NICs.
>
> This setup transfers 1300 MB/s at the moment, is highly scalable and was
> about 70% cheaper than an FC solution.

Ok.

> At the moment it's xen 3.2 and a 2.6.18 kernel. I am evaluating 3.3 and 2.6.26 atm.
>
> If you're interested in this setup, I could get you an overview with a small
> abstract of what is managed where and why... you know ;)

Yeah.. a picture would be nice :)

And thanks for the answer!

-- Pasi
On Thu, Nov 27, 2008 at 06:05:27PM +0100, Thomas Halinka wrote:
> My storage header box (which is also the openqrm server) has sw-raid over a
> bunch of intel servers with vblade.
>
> So I created some mdX devices and put them in a VG.
>
> /dev/md0 ... /dev/md19 are my AoE backend and are PVs in a VG.
>
> On this storage header I'm exporting the lvols through AoE again. So my XEN
> boxes grab their lvols...
>
> Sounds confusing, but maybe a drawing would show the concept.

Do you run LVM on the dom0s for domU disks?

How many 'SAN disks' does each dom0 see?

-- Pasi
Ola Stefan,

hope you got your vt-d stuff running.

On Thursday, 27.11.2008, 19:57 +0100, Stefan Bauer wrote:
> just out of curiosity, how did you measure the throughput of 1300 MB
> (megabytes) per second?

I started "dd if=/dev/zero of=/path/to/aoe/disk bs=1M count=100000" on each
dom0 at the same time and accumulated the performance of each dd process.
All 8 dd processes together gave me a total throughput of 1386 MB/s.

Yeah - I know that dd is not really representative for every application, but
in my case the setup is built for scientific data processing, and so it is
representative in my case.

> What does this value describe?

The amount of written data the storage header processed each second - as
said, it's scalable in any way. Just wait for the drawing and you will
understand why ;)

Regards,

Thomas
Thomas Halinka wrote:
> hope you got your vt-d stuff running.

Unfortunately not yet. I'm still awaiting a response from the lenovo guys, as
there is some bios bug in my case.

> Yeah - I know that dd is not really representative for every application, but
> in my case the setup is built for scientific data processing, and so it is
> representative in my case.
>
> > What does this value describe?
>
> The amount of written data the storage header processed each second - as
> said, it's scalable in any way. Just wait for the drawing and you will
> understand why ;)

Hopefully your drawing will enlighten the setup.

Stefan
Hello Stefan,

On Thursday, 27.11.2008, 22:37 +0100, Stefan Bauer wrote:
> Thomas Halinka wrote:
> > hope you got your vt-d stuff running.
>
> Unfortunately not yet. I'm still awaiting a response from the lenovo guys, as
> there is some bios bug in my case.

hmm, shit - my bios is running fine. Which model do you own exactly?

My bios is from 08/20/08 and working.

> Hopefully your drawing will enlighten the setup.

I am not an artist, but I will do my best :)

Thomas
Thomas Halinka wrote:
> hmm, shit - my bios is running fine. Which model do you own exactly?
>
> My bios is from 08/20/08 and working.

The plate on the front shows:

MT-M 6075-BQG

--
stefan
On Thursday, 27.11.2008, 23:08 +0100, Stefan Bauer wrote:
> Thomas Halinka wrote:
> > hmm, shit - my bios is running fine. Which model do you own exactly?
> >
> > My bios is from 08/20/08 and working.
>
> The plate on the front shows:
>
> MT-M 6075-BQG

Hmm, I got a 9194-A1G, which works really nicely.

Thomas
Hi Pasi,

On Thursday, 27.11.2008, 21:25 +0200, Pasi Kärkkäinen wrote:
> > If you're interested in this setup, I could get you an overview with a small
> > abstract of what is managed where and why... you know ;)
>
> Yeah.. a picture would be nice :)

You can get it here: http://openqrm.com/storage-cluster.png

Some words to say:

- the openqrm server has mdadm started and sees all mdX devices
- the openqrm server knows a VG "data" with all mdX devices inside
- the openqrm server exports the lvols to the LAN
- the openqrm server provides a boot service (pxe), which deploys a XEN image
  to xen_1-X and puts this resource into a puppet class

In this xen image, heartbeat2 with crm and constraints is implemented. puppet
only alters the config for me...

Some explanations:

- all the storage boxes are standard servers with 4x GB NICs and 24 SATA disks
  on an Areca RAID 6 (areca is impressive, since the write performance of
  raid 5 and raid 6 = raid 0). Only a small OS, and the rest of the disks is
  exported through vblade.
- header_a and header_b are a heartbeat v1 cluster with drbd. drbd mirrors the
  data for openqrm and heartbeat does HA for openqrm.
- openqrm itself is the storage header exporting all the data from the storage
  boxes to the clients.
- the openqrm boot service deploys a xen image and puppet configuration to
  these xen servers.
- all xen servers see all vblades and shelves.
- the xen VMs reside on AoE blades, so snapshotting, lvextend and resize2fs
  are possible online.

Scalability:

Storage: go buy 3 new servers, put a bunch of harddisks inside, install linux,
install vblade and fire them up. On the openqrm server you only have to create
a new md and extend the volume group.

Performance: buy a new server and let it pxe-boot, create a new appliance and
watch your server reboot, start xen and join the cluster.

We started the cluster with about 110 GB of storage - at the moment we have
about 430 GB of data and have to extend up to 1.2 PB in summer 2009, which is
no problem.

Now go and search for a SAN solution like this and ask for the price ;)

http://www.storageperformance.org/results/benchmark_results_spc2 shows some
FC solutions...

I guess that by summer we will be in these performance regions with about 30%
of the costs and much more flexibility.
http://www.storageperformance.org/results/b00035_HP-XP24000_SPC2_full-disclosure.pdf

Price: Total: $1,635,434

> And thanks for the answer!

ay - cheers!

I will end this post with some words from Coraid CEO Kemp:
"... We truly are a SAN solution, but SAN is not in the vocabulary of Linux
people, because SAN is equated with fiber channel, and fiber channel is too
expensive. But now, there's 'poor man SAN'" [1]

> -- Pasi

Thomas

Any questions - ask me ;)

[1] http://www.linuxdevices.com/news/NS3189760067.html
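A side note on the "live backup through LVM snapshotting" mentioned earlier:
on a setup like this it would typically be done on the storage header roughly
as follows; the LV name, snapshot size and backup target are invented, not
taken from Thomas's configuration:

    # freeze a consistent view of a running domU's disk without stopping the domU
    lvcreate -s -L 5G -n vm01-disk-snap /dev/data/vm01-disk

    # copy the snapshot away at leisure, then drop it
    dd if=/dev/data/vm01-disk-snap bs=1M | gzip > /backup/vm01-disk.img.gz
    lvremove -f /dev/data/vm01-disk-snap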
On Thursday, 27.11.2008, 21:27 +0200, Pasi Kärkkäinen wrote:
> > My storage header box (which is also the openqrm server) has sw-raid over a
> > bunch of intel servers with vblade.
> >
> > So I created some mdX devices and put them in a VG.
> >
> > /dev/md0 ... /dev/md19 are my AoE backend and are PVs in a VG.
> >
> > On this storage header I'm exporting the lvols through AoE again. So my XEN
> > boxes grab their lvols...
>
> Do you run LVM on the dom0s for domU disks?

Nope - just phy:/dev/etherd/e0.1... The domU works directly on the blade,
without partitioning, so I can snapshot, lvextend and resize the filesystem
on the openqrm server.

> How many 'SAN disks' does each dom0 see?

The output of aoe-stat is the same on all dom0s - so each dom0 sees all
disks - otherwise live migration would not be working...

> -- Pasi

Thomas
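For readers unfamiliar with the notation: phy:/dev/etherd/e0.1 means the AoE
device is handed to the domU as a raw physical backend. A hedged sketch of
what such a domU config and a later grow could look like - the domU name,
device letters and sizes are invented, not Thomas's actual values:

    # snippet of a domU config file on a dom0: the AoE blade is passed straight through
    name = 'vm01'
    disk = [ 'phy:/dev/etherd/e0.1,xvda,w' ]

    # growing the disk later, on the storage header:
    lvextend -L +20G /dev/data/vm01-disk
    # the filesystem sits directly on the LV (no partition table), so it is grown
    # with resize2fs - from inside the running domU for an online resize, or on
    # the header while the domU is shut down
    resize2fs /dev/data/vm01-disk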
On Fri, Nov 28, 2008 at 12:12:01AM +0100, Thomas Halinka wrote:
> Hi Pasi,
>
> You can get it here: http://openqrm.com/storage-cluster.png
>
> [... full description of the storage cluster setup snipped ...]
>
> Any questions - ask me ;)
>
> [1] http://www.linuxdevices.com/news/NS3189760067.html

Pretty nice setup you have there :)

Thanks for the explanation. It was nice to see details about a pretty big setup.

Have you had any problems with it? How about failovers from header a to b..
do they cause any problems?

-- Pasi