Lex
2010-Feb-01 15:29 UTC
[Lustre-discuss] Re: High difference in I/O network traffic in Lustre client
From: Lex <lexluthor87 at gmail.com>
Date: Mon, Feb 1, 2010 at 10:28 PM
Subject: Re: [Lustre-discuss] High difference in I/O network traffic in Lustre client
To: Mag Gam <magawake at gmail.com>

I have 8 OSSs and 8 OSTs. Hardware info:

CPU: Intel(R) Xeon E5420 2.5 GHz, Intel 5000P chipset
RAM: 8 GB
Disks: 8 x 1.5 TB, divided into 2 arrays on an Adaptec 5805 RAID controller

We are using 2 x 1 Gigabit Ethernet cards with Linux bonding (the OS is CentOS 5.3). Our Lustre clients work as web servers for file downloads, so many files are read by web clients; I can't give you an exact number. (We have about a million files in our Lustre storage system; unfortunately, quite a lot of them are small files and Linux soft links.) Files are striped over 2 OSTs each, and some are striped over all our OSTs (only a few deviate from 2-OST parallel striping).

Do you have any idea about my issue? Many thanks.

On Mon, Feb 1, 2010 at 8:05 PM, Mag Gam <magawake at gmail.com> wrote:
> How many OSSs and OSTs do you have? What type of hardware are they
> running on? What type of network connection? Which OSS is the file you
> are trying to access on? Are the files striped?
>
> On Mon, Feb 1, 2010 at 4:44 AM, Lex <lexluthor87 at gmail.com> wrote:
> > Hi guys
> >
> > In an effort to improve our storage system performance, I found some
> > strange signs but unfortunately couldn't explain them myself, so I'm
> > posting here in the hope that you can help me clarify them.
> >
> > I'm using Lustre clients as web servers for file downloads. When our
> > system is under heavy load (about 12,000 concurrent connections across
> > 8 web servers / Lustre clients), %iowait is pushed to about 98% and the
> > load average is about 1000-2000!!! (Since it is only %iowait, I can
> > still run almost every command normally over ssh.) I think that is a
> > terrible number for a load average! But in that state the inbound and
> > outbound network traffic are almost the same (although only a few
> > MB/s :( )
> >
> > The odd thing is that right now, when we have only about 3,500
> > concurrent connections, the load average is about 50 (still too big,
> > right?), iowait is about 70%, and the difference between receive and
> > transmit traffic is too high, about 10-20 MB/s (see the attached file,
> > please).
> >
> > We have only about 20 connections to our local Lustre storage system:
> >
> > netstat -nat | grep 192.168.1.75
> > tcp   0   560   192.168.1.75:1023   192.168.1.85:988    ESTABLISHED
> > tcp   0     0   192.168.1.75:1023   192.168.1.81:988    ESTABLISHED
> > tcp   0     0   192.168.1.75:988    192.168.1.85:1023   ESTABLISHED
> > tcp   0     0   192.168.1.75:988    192.168.1.85:1022   ESTABLISHED
> > tcp   0     0   192.168.1.75:988    192.168.1.81:1023   ESTABLISHED
> > tcp   0     0   192.168.1.75:988    192.168.1.81:1022   ESTABLISHED
> > tcp   0     0   192.168.1.75:988    192.168.1.100:1023  ESTABLISHED
> > tcp   0     0   192.168.1.75:1021   192.168.1.78:988    ESTABLISHED
> > tcp   0     0   192.168.1.75:1023   192.168.1.78:988    ESTABLISHED
> > tcp   0     0   192.168.1.75:1022   192.168.1.78:988    ESTABLISHED
> > tcp   0   560   192.168.1.75:1023   192.168.1.100:988   ESTABLISHED
> >
> > and about 400 connections with clients from the internet:
> >
> > netstat -nat | grep out_wan_ip | grep EST | wc -l
> > 407
> >
> > We're currently using 2 Gigabit Ethernet cards, one on the
> > 192.168.1.0/24 network for LNET and the other with a WAN IP for
> > delivering files out to the internet, and about 15 MB/s of throughput
> > is "lost" somehow!!!
> >
> > So, my questions are:
> >
> > - Does anyone have an idea or hint about the high-load situation with
> > our Lustre client / web server described above? I followed this link
> > and found that the kjournald process is the main "culprit" (on our
> > OSTs, it was the "ll" process).
> > - What causes the large difference between the receive and transmit
> > directions on our Lustre client / web server?
> >
> > I'm really stressed by the poor performance of our storage system and
> > hope someone here can help me pinpoint something.
> >
> > Any help would be highly appreciated.
> >
> > Best regards
> >
> > _______________________________________________
> > Lustre-discuss mailing list
> > Lustre-discuss at lists.lustre.org
> > http://lists.lustre.org/mailman/listinfo/lustre-discuss
Andreas Dilger
2010-Feb-01 20:25 UTC
[Lustre-discuss] Re: High difference in I/O network traffic in Lustre client
On 2010-02-01, at 08:29, Lex wrote:
> I have 8 OSSs and 8 OSTs. [...]
> Files are striped over 2 OSTs each, some are striped over all our OSTs.
>
> Do you have any idea about my issue?

If you are using small files, you shouldn't be striping your files over
multiple OSTs. That is increasing the workload on the OSTs (size, lock
overhead) without providing any benefits, because the data is only stored
on the first OST (assuming a 1 MB stripe size and file size <= 1 MB).

> [...]

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
Lex
2010-Feb-02 09:11 UTC
[Lustre-discuss] Re: High difference in I/O network traffic in Lustre client
> did you measure the performance of this system before lustre? specifically,
> your symptoms make it look like your disk system can't handle the load.
> since you have lots of small activity, the issue wouldn't be bandwidth,
> but latency. I've normally only seen this on the MDS, where metadata
> traffic can generate quite high numbers of transactions, even though the
> bandwidth is low.

Tell me exactly what information would be useful for helping me diagnose our problem, please.

> for instance, is the MDS volume a slow-write form of raid like raid5 or
> raid6? MDS activity is mainly small, synchronous transactions such as
> directory updates, which is why the MDS should be on raid10.

We use raid10 for our MDS and it is operating quite idle. Below is some info about load average and network traffic (output from the w and bmon commands). It isn't high enough to cause the delay, right?

load average: 0.05, 0.10, 0.09

Name           RX                     TX
MDS1 (local)   Rate       #           Rate       #
0 lo           0 B        0           0 B        0
1 eth0         22 B       0           344.59KiB  736
2 eth1         670.49KiB  1.37K       267.29KiB  592
3 bond0        670.51KiB  1.38K       611.88KiB  1.30K

> > are quite a lot small file: a linux soft links ) Files are "striped" over
>
> in a normal filesystem, symlinks are stored in the inode itself, at least
> for short symlink targets. I guess that applies to lustre as well - the
> symlink would be on the MDS. but there are issues related to the size of
> the inode on the MDS, since striping information is also stored in EAs,
> which are also hopefully within the file's inode. when there's too much
> to fit into an inode, performance suffers, since the same metadata
> operations now require extra seeks.

I will consider this.

> > each 2 OSTs, some are striped over all our OSTs ( fewer than 2 OSTs
> > parallel striping )
>
> whether it makes sense to stripe over all OSTs or not depends on the
> sizes of your files. but since you have only gigabit, it's probably not
> a good idea. (that is, accessing a striped file won't be any faster,
> since it'll bottleneck on the client's network port.)

Could you please tell me in detail the disadvantage of 1 Gigabit Ethernet for Lustre, and what exactly the bottleneck on the client's network port is? (I tried to install more NICs in the client and bonded them together, but it didn't help.)

I found in a paper (from Google) that bonding 3 x 1 Gigabit Ethernet devices should improve the problem significantly. But in our case I could not even reach the limit of 1 Gigabit!!!

> > Do you have any idea for my issue?
>
> I think you need to find out whether the performance problem is merely
> due to latency (metadata rate) on the MDS. looking at normal performance
> metrics on the MDS when under load (/proc/partitions, etc) might be able
> to show this. even "vmstat 1" may be informative, to see what sorts of
> blocks-per-second IO rates you're getting.

Here is the output of vmstat 1 over 10 seconds:

root at MDS1: ~ # vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free    buff   cache   si   so   bi   bo    in    cs  us sy id wa st
 1  0    140 243968 3314424 432776    0    0    1    6     2     1   0  2 97  1  0
 0  0    140 244092 3314424 432776    0    0    0    4  3037  6938   0  2 97  1  0
 0  0    140 244092 3314424 432776    0    0    0    4  2980  6759   0  2 98  1  0
 0  0    140 244216 3314424 432776    0    0    0   16  3574  8966   0  3 94  3  0
 0  0    140 244092 3314424 432776    0    0    0    4  3511  8639   1  2 97  1  0
 0  1    140 244092 3314424 432776    0    0    0   36  3549  8871   0  2 97  1  0
 0  0    140 244092 3314424 432776    0    0    0    4  3085  7304   0  2 97  1  0
 0  0    140 243968 3314424 432776    0    0    0   20  3199  7566   0  2 97  1  0
 0  0    140 244092 3314424 432776    0    0    0   16  3294  7950   0  2 95  3  0
 0  0    140 244092 3314424 432776    0    0    0    4  3336  8301   0  2 97  1  0

and iostat -m 1 5:

Linux 2.6.18-92.1.17.el5_lustre.1.8.0custom (MDS1)   02/02/2010

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.17    0.02    1.53    1.33    0.00   96.96

Device:   tps   MB_read/s   MB_wrtn/s   MB_read   MB_wrtn
sda      3.66        0.00        0.02     12304     79721
drbd1    6.43        0.00        0.02     10709     70302

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.75    0.00    2.24    0.75    0.00   96.26

Device:   tps   MB_read/s   MB_wrtn/s   MB_read   MB_wrtn
sda      1.00        0.00        0.00         0         0
drbd1    1.00        0.00        0.00         0         0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    1.75    1.00    0.00   97.24

Device:   tps   MB_read/s   MB_wrtn/s   MB_read   MB_wrtn
sda      4.00        0.00        0.05         0         0
drbd1    1.00        0.00        0.00         0         0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    2.00    3.50    0.00   94.50

Device:   tps   MB_read/s   MB_wrtn/s   MB_read   MB_wrtn
sda      3.00        0.00        0.02         0         0
drbd1    4.00        0.00        0.02         0         0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    2.49    0.75    0.00   96.76

Device:   tps   MB_read/s   MB_wrtn/s   MB_read   MB_wrtn
sda      1.00        0.00        0.00         0         0
drbd1    1.00        0.00        0.00         0         0

I don't think our MDS is too busy (do correct me if I am reading our own situation wrongly, please).

Do you have any ideas or comments?

Many many thanks
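[Editor's note: to put a number on the "is the MDS busy?" question, the bo (blocks written out per second) column of a captured `vmstat 1` run can be averaged with a quick awk pass. This sketch uses the sample rows from the message above; it skips the two header lines and the first sample line, which reports averages since boot.]

```shell
# Average the "bo" column (field 10 in the standard vmstat layout)
# over the per-second samples; NR > 3 skips the two header lines and
# the first sample (averages since boot).
avg_bo=$(awk 'NR > 3 { sum += $10; n++ } END { printf "%.0f", sum / n }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free    buff   cache   si   so   bi   bo    in    cs  us sy id wa st
 1  0    140 243968 3314424 432776    0    0    1    6     2     1   0  2 97  1  0
 0  0    140 244092 3314424 432776    0    0    0    4  3037  6938   0  2 97  1  0
 0  0    140 244092 3314424 432776    0    0    0    4  2980  6759   0  2 98  1  0
 0  0    140 244216 3314424 432776    0    0    0   16  3574  8966   0  3 94  3  0
 0  0    140 244092 3314424 432776    0    0    0    4  3511  8639   1  2 97  1  0
 0  1    140 244092 3314424 432776    0    0    0   36  3549  8871   0  2 97  1  0
 0  0    140 244092 3314424 432776    0    0    0    4  3085  7304   0  2 97  1  0
 0  0    140 243968 3314424 432776    0    0    0   20  3199  7566   0  2 97  1  0
 0  0    140 244092 3314424 432776    0    0    0   16  3294  7950   0  2 95  3  0
 0  0    140 244092 3314424 432776    0    0    0    4  3336  8301   0  2 97  1  0
EOF
)
echo "average blocks out per second: $avg_bo"
```

An average of around a dozen blocks per second written supports the poster's conclusion that the MDS disks are nearly idle, so the client-side load is unlikely to come from MDS disk latency.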
Lex
2010-Feb-02 09:15 UTC
[Lustre-discuss] Re: High difference in I/O network traffic in Lustre client
That was my mistake; I know the advantages and disadvantages of striping, but... there it is. I am looking for a way to re-arrange our small files and soft links. Do you have any hint?

> If you are using small files, you shouldn't be striping your files over
> multiple OSTs. That is increasing the workload on the OSTs (size, lock
> overhead) without providing any benefits, because the data is only stored
> on the first OST (assuming a 1 MB stripe size and file size <= 1 MB).

And what about the difference in I/O network traffic? Do you have any guidance?
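[Editor's note: one possible contributor to a client receiving far more from the OSTs than it transmits to web clients is client-side readahead: Lustre reads ahead in large chunks, so small or partially downloaded files can pull in data that is never sent on. A hedged sketch of inspecting and capping it on a client follows; the `llite` parameter names are the 1.8-era ones and the value of 4 MB is an illustrative guess, so verify both on your own nodes.]

```shell
# Show the current client-side readahead settings (Lustre 1.8-era names).
lctl get_param llite.*.max_read_ahead_mb
lctl get_param llite.*.max_read_ahead_whole_mb

# Cap per-file readahead so that small, partially-read files pull less
# unused data across the LNET link from the OSTs (4 MB is illustrative).
lctl set_param llite.*.max_read_ahead_mb=4
```

Comparing the RX/TX gap before and after such a change would show whether readahead, rather than the disks, accounts for the "lost" 10-20 MB/s.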
Lex
2010-Feb-02 09:21 UTC
[Lustre-discuss] Re: High difference in I/O network traffic in Lustre client
I attached some screenshots (mds_traf.jpg and mds-io.jpg) to make the iostat and bmon results easier to see. :)
Atul Vidwansa
2010-Feb-03 10:35 UTC
[Lustre-discuss] Re: High difference in I/O network traffic in Lustre client
Hi Lex,

If you are using Lustre for small files, another blog post of mine may be helpful. Have a look at http://blogs.sun.com/atulvid/entry/improving_performance_of_small_files

The post includes some tips on improving the performance of small files and also has recommendations on tools to use for benchmarking.

Cheers,
_Atul

Lex wrote:
> [...]
Hi all,

I am trying to access the download site via the Lustre home page (http://www.lustre.org) by clicking the 'Get Lustre' link. Until recently an overview page for choosing among different Lustre versions was displayed. At the moment the 'authentication page' for Lustre 1.8.1.1 is displayed, and login doesn't work anymore. Does somebody know about a workaround (besides git) or another download location? Many thanks in advance.

-Frank

------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Registered office: Juelich
Registered in the commercial register of the District Court of Dueren, No. HR B 3498
Chairwoman of the Supervisory Board: MinDir'in Baerbel Brumme-Bothe
Management: Prof. Dr. Achim Bachem (Chairman), Dr. Ulrich Krafft (Deputy Chairman), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------
On 2010-02-05, at 01:13, Frank Heckes wrote:
> I am trying to access the download site via the Lustre home page
> (http://www.lustre.org) by clicking the 'Get Lustre' link. Until
> recently an overview page for choosing among different Lustre versions
> was displayed. At the moment the 'authentication page' for Lustre
> 1.8.1.1 is displayed, and login doesn't work anymore.
> Does somebody know about a workaround (besides git) or another
> download location? Many thanks in advance.

Please see Peter Jones' recent email on this topic. When the Sun website was redirected over to the Oracle webserver, some of the links were broken. The URL to use for now is http://www.sun.com/download/index.jsp?tab=2&check_1=on

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
Yes, I had hoped that this would be long fixed by now, but it is still a bit confusing and hard to find things (not to mention the large "Buy Now" button that leads from the Oracle Lustre page to the free download :-) ). It is due to changes being made en masse without a review by anyone at Sun (as only very limited contact was permitted before the deal completed). We are trying to get this all cleaned up as we roll 1.8.2 onto the download site. (Before anyone asks: the release is available from git. I cannot say with any certainty how long the rollout onto the download site will take during the integration, but we hope any day.)

Andreas Dilger wrote:
> On 2010-02-05, at 01:13, Frank Heckes wrote:
>> [...]
>
> Please see Peter Jones' recent email on this topic. When the Sun
> website was redirected over to the Oracle webserver, some of the links
> were broken. The URL to use for now is
> http://www.sun.com/download/index.jsp?tab=2&check_1=on
>
> Cheers, Andreas