Harald van Pee
2007-May-13 16:35 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
Hi all,

I have just installed lustre 1.6.0.1, and before I forget how, I want to post the procedure:

0. I unmounted all clients and servers.
1. I installed kernel 2.6.18.8, rebooted, and tested that everything was working.
2. I installed e2fsprogs-1.39.cfs7-0redhat.i386.rpm with alien (rebooted and tested that everything was working).
3. I applied all patches from
   lustre-1.6.0.1/lustre/kernel_patches/series/2.6.18-vanilla.series
   lustre-1.6.0.1/lustre/kernel_patches/series/ldiskfs-2.6.18-vanilla.series
   by hand with patch:
   cd /usr/src/linux
   patch -p1 < /usr/src/lustre-1.6.0.1/lustre/kernel_patches/patches/lustre_version.patch
   ...
4. The patches mentioned in
   https://mail.clusterfs.com/pipermail/lustre-discuss/2006-October/002263.html
   https://mail.clusterfs.com/pipermail/lustre-discuss/2006-October/002256.html
   still have to be applied.
5. ./configure --with-linux=/usr/src/linux --disable-quilt
6. make; make install
   Unfortunately I have not managed to build Debian packages, and some old files from version 1.5.95 are still lying around (and could cause trouble?).
7. Reboot. Don't forget to configure your network (see Lustre 1.6.x Operations Manual, 3.2.1 Module Parameters).
8. Because I want to keep my old data, I ran
   tunefs.lustre --writeconf /disk
   on all server disks.
9. Remounted all servers, MDT first.
10. Mounted the clients.

There are some error messages in the system log, but it seems that Lustre has recovered from them. At least I got no errors and the data are available!

Up to now I have not done much with lustre 1.6.0.1, but I will do so soon and will also try to use patchless clients.

Best regards
Harald
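For reference, the sequence above condensed into a shell sketch. Paths follow the post; only the 2.6.18-vanilla series is applied by hand here, and /dev/sdX, the mount points, and the filesystem name are placeholders rather than the exact ones used.

  # Condensed sketch of the upgrade sequence above; /dev/sdX and "lustre" (the
  # fs name) are placeholders for the real server devices and filesystem name.
  cd /usr/src/linux
  for p in $(grep -v '^#' /usr/src/lustre-1.6.0.1/lustre/kernel_patches/series/2.6.18-vanilla.series); do
      patch -p1 < /usr/src/lustre-1.6.0.1/lustre/kernel_patches/patches/$p
  done
  # ... build and boot the patched kernel, then build Lustre against it:
  cd /usr/src/lustre-1.6.0.1
  ./configure --with-linux=/usr/src/linux --disable-quilt
  make && make install
  # keep the existing data but regenerate the config logs, on every server device:
  tunefs.lustre --writeconf /dev/sdX
  # remount the servers (MDT first), then the clients:
  mount -t lustre /dev/sdX /mnt/mdt
  mount -t lustre mgsnode@tcp0:/lustre /mnt/lustre      # on each client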
Robert LeBlanc
2007-May-14 07:09 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
On 5/13/07 4:35 PM, "Harald van Pee" <pee@hiskp.uni-bonn.de> wrote:

> I have just installed lustre 1.6.0.1, and before I forget how, I want to post
> the procedure:
> [...]
> Up to now I have not done much with lustre 1.6.0.1, but I will do so soon and
> will also try to use patchless clients.

You have gotten lustre to build and run with 2.6.18.8? I tried doing that on etch with no luck. I applied the same patches, and lustre would build but would not run. I didn't have time to work on it, so I just repackaged SUSE kernels and source. I really will need to get to a >2.6.18 kernel in the next month when we get our new Infiniband cluster in. If you don't mind, keep me posted if you run into any troubles. I will try to disable quilt and rebuild lustre (about the only step you did differently).

Robert

Robert LeBlanc
BioAg Computer Support
Brigham Young University
leblanc@byu.edu
(801)422-1882
Harald van Pee
2007-May-14 07:38 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
On Monday 14 May 2007 03:09 pm, Robert LeBlanc wrote:

> You have gotten lustre to build and run with 2.6.18.8?

Yes! Indeed, I have been running lustre 1.5.95 with kernel 2.6.18.6 for two months in a test environment (3 OSTs, 1 MDS/MDT, 20 CPU cores as clients) without any serious problems (apart from the known problems of the lustre beta 5). Unfortunately I can only use gigabit ethernet at the moment, but I plan to use etch with infiniband, at least as a client, in the near future.

Is it possible to use the etch e2fsprogs-1.39+1.40-WIP-2006.11.1, or is it better or even necessary to use e2fsprogs-1.39.cfs7?

There is a Debian package in the unstable branch
http://packages.qa.debian.org/l/lustre.html
but it is still at version 1.5.97. Does anybody know if there are problems with 1.6.0 and etch?

Harald
Robert LeBlanc
2007-May-14 08:04 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
I've compiled and run lustre with vanilla kernel 2.6.12 and SUSE 2.6.16 with e2fsprogs-1.39+1.40-WIP-2006.11.14+dfsg-2 without any problems or issues.

I tried installing the Lustre debs on etch, but there were broken dependencies, and there is no way I'm going to run unstable in my production environment. Our NFS server was too hammered, and I had to put something in quickly to relieve the NFS server and keep our jobs from going to pot. I tried to download the source packages, but getting them to compile seemed more difficult than getting the lustre tar to compile, or even aliening the SUSE packages that had everything patched already. Now that the fire is put out, I'll look more into building from source.

I've been running beta5 and 1.6.0 on etch for a couple of weeks. I've only been working with lustre for a couple of weeks so far, though. It seems solid, and we haven't run into any problems thus far.

Robert

On 5/14/07 7:38 AM, "Harald van Pee" <pee@hiskp.uni-bonn.de> wrote:

> Yes! Indeed, I have been running lustre 1.5.95 with kernel 2.6.18.6 for two
> months in a test environment (3 OSTs, 1 MDS/MDT, 20 CPU cores as clients)
> without any serious problems (apart from the known problems of the lustre
> beta 5).
> [...]
> Is it possible to use the etch e2fsprogs-1.39+1.40-WIP-2006.11.1, or is it
> better or even necessary to use e2fsprogs-1.39.cfs7?
> [...]

Robert LeBlanc
BioAg Computer Support
Brigham Young University
leblanc@byu.edu
(801)422-1882
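A hedged example of that alien route; the RPM file names below are placeholders, not the exact packages used here:

  # Hypothetical example of converting the pre-patched CFS/SUSE RPMs to .deb
  # with alien; the RPM file names are placeholders.
  alien --to-deb --scripts kernel-smp-2.6.16.27_lustre.1.6.0.1.x86_64.rpm
  alien --to-deb --scripts lustre-1.6.0.1-*.x86_64.rpm lustre-modules-1.6.0.1-*.x86_64.rpm
  dpkg -i *.deb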
Andreas Dilger
2007-May-15 00:23 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
On May 14, 2007 07:09 -0600, Robert LeBlanc wrote:

> > 3. applied all patches from
> > lustre-1.6.0.1/lustre/kernel_patches/series/2.6.18-vanilla.series
> > lustre-1.6.0.1/lustre/kernel_patches/series/ldiskfs-2.6.18-vanilla.series

You shouldn't really apply the ldiskfs patches to your kernel ext3. Instead, let the lustre build process build a new ldiskfs module with the patches.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
Harald van Pee
2007-May-15 01:17 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
On Tuesday 15 May 2007 08:23 am, Andreas Dilger wrote:

> On May 14, 2007 07:09 -0600, Robert LeBlanc wrote:
> > > 3. applied all patches from
> > > lustre-1.6.0.1/lustre/kernel_patches/series/2.6.18-vanilla.series
> > > lustre-1.6.0.1/lustre/kernel_patches/series/ldiskfs-2.6.18-vanilla.series
>
> You shouldn't really apply the ldiskfs patches to your kernel ext3.
> Instead let the lustre build process build a new ldiskfs module with the
> patches.

O.k., thank you!
Harald van Pee
2007-May-15 01:32 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
On Monday 14 May 2007 04:04 pm, Robert LeBlanc wrote:

> I've compiled and run lustre with vanilla kernel 2.6.12 and SUSE 2.6.16 with
> e2fsprogs-1.39+1.40-WIP-2006.11.14+dfsg-2 without any problems or issues.
> [...]
> I've been running beta5 and 1.6.0 on etch for a couple of weeks. I've only
> been working with lustre for a couple of weeks so far, though. It seems
> solid, and we haven't run into any problems thus far.

You are running lustre 1.6.0 with a SuSE 2.6.16 kernel on Debian etch without any problems? Sounds interesting!
Do you also use infiniband right now, or is this just planned?

Harald
Robert LeBlanc
2007-May-15 07:15 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
> You are running lustre 1.6.0 with a SuSE 2.6.16 kernel on Debian etch without
> any problems? Sounds interesting!
> Do you also use infiniband right now, or is this just planned?

Yes. I did alien the SUSE kernel package and ran it. I had to create a cramfs initrd image, and I thought that would be too much of a pain on all my cluster nodes. So I aliened the kernel and source packages, installed them, and built Debian packages from them. Now it uses yaird for the initrd image and works great.

We are upgrading our three-year-old cluster that has only 100 Mb with a new one that will only have IB. We are looking to run IPoIB and Fibre Channel over it as well as MPI. I hear that IPoIB and Fibre Channel over IB have matured a lot in >=2.6.18, so I'd really like to move to a newer kernel. Our new cluster will be arriving mid June, with a test cluster with our config at the first of June.

Robert LeBlanc
BioAg Computer Support
Brigham Young University
leblanc@byu.edu
(801)422-1882
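A hedged sketch of that repackaging step using kernel-package; the source directory name and version string are assumptions, not the exact ones used:

  # Hypothetical sketch: build Debian kernel packages from the aliened SUSE
  # kernel source; directory name and --append-to-version string are assumptions.
  cd /usr/src/linux-2.6.16.27-0.9
  make oldconfig                     # reuse an existing .config
  make-kpkg clean
  make-kpkg --initrd --append-to-version=-lustre kernel_image kernel_headers
  dpkg -i ../linux-image-2.6.16*-lustre_*.deb   # or kernel-image-*, depending on kernel-package version
  # with yaird set as the ramdisk tool in /etc/kernel-img.conf, the initrd is
  # generated when the package is installed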
Robert LeBlanc
2007-May-15 07:23 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
On 5/15/07 1:17 AM, "Harald van Pee" <pee@hiskp.uni-bonn.de> wrote:

> On Tuesday 15 May 2007 08:23 am, Andreas Dilger wrote:
> > You shouldn't really apply the ldiskfs patches to your kernel ext3.
> > Instead let the lustre build process build a new ldiskfs module with the
> > patches.
>
> o.k. thank you!

Is this different from what is in the documentation? All I have been doing is creating symlinks for series and patches in the kernel source tree root. I then run quilt, apply all the patches, and build the kernel. I don't link to the ldiskfs series because I didn't read that in the Install file.

I then reboot with the new kernel and compile Lustre with just configure, make, make install (well, a little differently to make Debian packages). So I guess I'm not patching ext3 (although there are still a couple of patches applied) and am building a new ldiskfs module.

Robert LeBlanc
BioAg Computer Support
Brigham Young University
leblanc@byu.edu
(801)422-1882
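A minimal sketch of that symlink-and-quilt workflow, assuming the Lustre tarball is unpacked under /usr/src:

  # Minimal sketch of the symlink + quilt workflow (Lustre source path assumed).
  cd /usr/src/linux
  ln -s /usr/src/lustre-1.6.0.1/lustre/kernel_patches/series/2.6.18-vanilla.series series
  ln -s /usr/src/lustre-1.6.0.1/lustre/kernel_patches/patches patches
  quilt push -av          # apply every patch in the series, verbosely
  # then configure and build the kernel as usual, and reboot into it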
Harald van Pee
2007-May-15 07:33 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
On Tuesday 15 May 2007 03:23 pm, Robert LeBlanc wrote:

> On 5/15/07 1:17 AM, "Harald van Pee" <pee@hiskp.uni-bonn.de> wrote:
> > On Tuesday 15 May 2007 08:23 am, Andreas Dilger wrote:
> > > You shouldn't really apply the ldiskfs patches to your kernel ext3.
> > > Instead let the lustre build process build a new ldiskfs module with the
> > > patches.
> >
> > o.k. thank you!

I think the hint above only concerns the case where you apply the patches by hand with patch: then you should not use the ldiskfs patch series, because it is only needed for building the ldiskfs module and is applied during make, if I understand it correctly.

Harald

> Is this different from what is in the documentation? All I have been doing is
> creating symlinks for series and patches in the kernel source tree root. I
> then run quilt, apply all the patches, and build the kernel. I don't link to
> the ldiskfs series because I didn't read that in the Install file.
> [...]
Kalpak Shah
2007-May-15 07:44 UTC
[Lustre-discuss] lustre 1.6.0.1 on debian sarge with 2.6.18.8 upgrade from 1.5.95(beta5)
On Tue, 2007-05-15 at 15:33 +0200, Harald van Pee wrote:

> I think the hint above only concerns the case where you apply the patches by
> hand with patch: then you should not use the ldiskfs patch series, because it
> is only needed for building the ldiskfs module and is applied during make, if
> I understand it correctly.

Yes, you do not need to apply the ldiskfs patches by hand. The make command applies the ldiskfs patches using quilt (use --disable-quilt if you wish) to create the ldiskfs modules from the ext3 sources present in the kernel source.

Thanks,
Kalpak
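A minimal sketch of the flow described above, assuming the patched kernel is already built and booted and the usual source paths:

  # Minimal sketch: Lustre's own build applies the ldiskfs series to a copy of
  # the kernel's ext3 sources and produces the ldiskfs module, so ext3 itself
  # is never patched by hand.
  cd /usr/src/lustre-1.6.0.1
  ./configure --with-linux=/usr/src/linux   # add --disable-quilt if you prefer not to use quilt, as noted above
  make && make install                      # ldiskfs patches are applied during make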