similar to: setup of patchless lustre client

Displaying 20 results from an estimated 1000 matches similar to: "setup of patchless lustre client"

2007 Oct 08
5
patchless client on RHEL4
Are there instructions on how to use the patchless client on RHEL4, for version 1.6.2? We would prefer an RPM, but we are not scared of doing a build if needed. Brock Palen Center for Advanced Computing brockp at umich.edu (734)936-1985
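A minimal out-of-tree build sketch for a patchless client, assuming the matching kernel-devel package is installed and that 1.6.2 supports a patchless build against this kernel at all; the kernel path below is illustrative:

    # hypothetical RHEL4 kernel-devel path; adjust to the installed version
    ./configure --disable-server \
        --with-linux=/usr/src/kernels/2.6.9-55.EL-x86_64
    make
    make rpms    # produces client RPMs, if the release provides this target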
2012 Oct 31
3
lustre client on arm debian
Hi, has anyone tried to compile the Lustre patchless client on Debian Linux for the ARM architecture? Would it be possible to do? Thanks in advance.
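A hedged sketch of the same out-of-tree client build on Debian, assuming the headers package matches the running kernel; whether it actually compiles on ARM is exactly the open question here:

    apt-get install build-essential linux-headers-$(uname -r)
    sh autogen.sh    # regenerate configure in the Lustre source tree
    ./configure --disable-server --with-linux=/usr/src/linux-headers-$(uname -r)
    make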
2008 Mar 25
2
patchless kernel
Dear All, make[5]: Entering directory `/usr/src/kernels/2.6.23.15-80.fc7-x86_64' /usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:142: warning: 'request_queue_t' is deprecated /usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:273: warning: 'request_queue_t' is deprecated /usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:312:
2009 Apr 21
1
Lustre patchless client with ofed-1.4
I am trying to compile lustre-1.6.6 against ofed-1.4 using the following configure options: ./configure --disable-server --with-o2ib=/usr/src/ofa_kernel-1.4 --with-linux=/usr/src/kernels/2.6.18-92.el5-x86_64 I am getting the following error: configure: error: can't compile with OpenIB gen2 headers under /usr/src/ofa_kernel-1.4 Is it that lustre-1.6.6 will only work with ofed-1.3, or am I
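When configure rejects the gen2 headers, the failing test compile is recorded in config.log, which usually names the real problem; a quick check, with paths taken from the post:

    grep -B 2 -A 20 'OpenIB gen2' config.log    # the underlying compiler error
    ls /usr/src/ofa_kernel-1.4/include/rdma/    # headers configure is probing for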
2008 Mar 26
7
Lustre Solution Delivery
What is the process for integrating a Lustre+SAMFS solution into an existing customer environment? The plan is to have CRS build the Lustre component, but Lustre and SAMFS will need to be configured and integrated into the customer computing environment. I am very familiar with the SAMFS integration, but not Lustre integration. Do we have resources in PS to provide the integration? Is this
2008 Jan 15
19
How do you make an MGS/OSS listen on 2 NICs?
I am running on the CentOS 5 distribution without adding any updates from CentOS. I am using the lustre 1.6.4.1 kernel and software. I have two NICs that run through different switches. I have the lustre options in my modprobe.conf set to look like this: options lnet networks=tcp0(eth1,eth0) My MGS seems to be listening only on the first interface, however. When I try to ping the 1st interface (eth1)
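For reference, tcp0(eth1,eth0) declares a single LNET network that spans both interfaces, so only one NID is advertised. A common alternative, sketched with the interface names from the post, is one LNET network per NIC:

    # modprobe.conf: a separate LNET tcp network per interface
    options lnet networks=tcp0(eth0),tcp1(eth1)

After reloading the modules, lctl list_nids should then report one NID per network.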
2007 Nov 12
8
More failover issues
In 1.6.0, when creating an MDT, you could specify multiple --mgsnode options and it would fail over between them. 1.6.3 only seems to take the last one, and --mgsnode=192.168.1.252 at o2ib:192.168.1.253 at o2ib doesn't seem to fail over to the other node. Any ideas how to get around this? Robert Robert LeBlanc College of Life Sciences Computer Support Brigham Young University leblanc at
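For reference, a sketch of registering both MGS NIDs via repeated --mgsnode options at format time and listing both at mount time; the fsname, target type, and device are placeholders, and the NIDs come from the post:

    mkfs.lustre --ost --fsname=testfs \
        --mgsnode=192.168.1.252@o2ib --mgsnode=192.168.1.253@o2ib /dev/sdX
    # clients can also name both MGS NIDs, colon-separated, at mount time:
    mount -t lustre 192.168.1.252@o2ib:192.168.1.253@o2ib:/testfs /mnt/testfs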
2006 May 03
0
ANNOUNCEMENT - LUSTRE ROLL for Rocks 4.1
Dear all, Scalable Systems is pleased to announce the availability of the Lustre Roll for Rocks 4.1 (Preview Release). The Lustre Roll contains the latest Lustre 1.4.6 version software packaged to work with Rocks 4.1. This is a *PREVIEW* release based on V1.4.6. The final and production release, which we will support officially, will be based on V1.6. Note that the Lustre Roll will install Lustre
2007 Jan 17
0
Lustre 1.6.0 beta7 is now available
NOTE: BETA SOFTWARE, NOT FOR PRODUCTION USE Cluster File Systems is pleased to announce the next beta version of Lustre 1.6, which includes the following new features: * Dynamic service threads - within a small range, extra service threads are started automatically when the request queue builds up. * Mixed-endian environment fixes * Easy permanent OST removal * MGS failover * MGS proc
2008 Feb 04
32
Lustre clients getting evicted
On our cluster that has been running lustre for about 1 month, I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster uses all GigE and has about 608 nodes / 1854 cores. We have a lot of jobs that die and/or go into high IO wait; strace shows processes stuck in fstat(). The big problem (I think; I would like some feedback on this) is that of these 608 nodes, 209 of them have in dmesg
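A first triage step, assuming standard syslog locations, is to correlate eviction messages on the servers with the stuck clients:

    # on the MDS/OSS: which clients were evicted, and when
    grep -i 'evicting client' /var/log/messages
    # on a stuck client: eviction and reconnect traces
    dmesg | grep -iE 'evict|reconnect'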
2008 Feb 22
6
2.6.23 client systems with any compatible server
I want to have a lustre client running on a system with 2.6.23.12 kernel. (The reason is that there is a special patch that is required for these 60+ Quad-Core AMD Opteron systems that we have and the patch is currently only available for this 2.6.23.12 kernel). Does anyone have a recommendation of how I should get a client and then a compatible server? For the server, we only need minimal
2009 Jul 04
3
scaffolding
Hi, I used the following command to scaffold: G:\my\webblog>ruby script/generate scaffold webblog id:integer title:string body :text created_at:datetime Afterwards, when I migrate with the command rake db:migrate, I get this error: (in G:/my/webblog) == 1 CreateWebblogs: migrating ================================================ -- create_table(:webblogs) rake aborted! Mysql::Error:
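The truncated Mysql::Error most often means the database named in config/database.yml does not exist yet; a hedged first step, assuming that diagnosis, before re-running the migration:

    rake db:create    # create the database named in config/database.yml
    rake db:migrate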
2010 Jul 03
2
patching FC13 kernel with Lustre
Hi, I am trying to merge Lustre into the Fedora 13 kernel (2.6.33.3-85.fc13.i686.PAE). I plan to submit the patches and continue to develop features on this kernel. Will I get any assistance from the Lustre development team? Can I maintain this patch on the Lustre website? Regards, Onkar -- **************************************************** A 1965 Ford Mustang is a great car. But if you want to go
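For context, the historical flow for a patched kernel applies the series shipped in lustre/kernel_patches with quilt; a rough sketch against the kernel from the post, where the series file name and paths are hypothetical:

    cd /path/to/linux-2.6.33.3-85.fc13    # full kernel source tree
    ln -s /path/to/lustre/lustre/kernel_patches/series/2.6-fc13.series series
    ln -s /path/to/lustre/lustre/kernel_patches/patches patches
    quilt push -av    # apply the whole series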
2010 Oct 13
2
New lustre-community mailing list
Hello All, I'd like to announce the creation of a new lustre-community at lists.lustre.org mailing list. After discussions with the various Lustre parties in the community, we thought there should be a new list to focus the "meta" discussion related to Lustre development in the community, such as how feature design, code development, patch contribution, and landing is
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root at MDS ~]# lctl list_nids
10.94.214.185 at tcp
[root at MDS ~]#
[/code]
On Lustre Client1:
[code]
[root at lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M  3.9G       6%    /mnt/lustre[MDT:0]
[/code]
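To see which targets the client considers inactive and to try reactivating one, a sketch where the device number is a placeholder read off the device list:

    lctl dl                     # numbered device list; note the status column
    lctl --device 7 activate    # hypothetical device number taken from lctl dl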
2008 Dec 22
2
Download lustre 1.6.6
I am unable to download Lustre 1.6.6; the links seem to be broken and are not working. -- Regards -- Rishi Pathak Pune-Maharashtra