search for: ost0000_uuid

Displaying 14 results from an estimated 14 matches for "ost0000_uuid".

2007 Nov 23
2
How to remove OST permanently?
All, I've added a new 2.2 TB OST to my cluster easily enough, but this new disk array is meant to replace several smaller OSTs that I used to have, which were only 120 GB, 500 GB, and 700 GB. Adding an OST is easy, but how do I REMOVE the small OSTs that I no longer want to be part of my cluster? Is there a command to tell Lustre to move all the file stripes off one of the nodes?
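A hedged sketch of the commonly documented approach, not taken from this thread: stop new object allocation on the OST to be retired, then copy its files so the data lands on the remaining targets. The device number, OST name and mount point below are placeholders, and device naming differs between Lustre versions.
[code]
# On the MDS: find the device number of the OST to retire, then stop new
# object allocation on it (existing data stays readable).
lctl dl | grep OST0002
lctl --device <devno> deactivate

# On a client: list the files that have stripes on that OST, then copy each
# one and rename it back so the new copy is allocated on the active OSTs
# (newer releases ship an lfs_migrate helper for this step).
lfs find --obd lustre-OST0002_UUID /mnt/lustre > /tmp/files_on_ost2
[/code]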
2008 Mar 06
2
strange lustre errors
Hi, On a few of the HPC cluster nodes, I am seeing a new Lustre error that is pasted below. The volumes are working fine and there is nothing on the OSS and MDS to report.
LustreError: 5080:0:(import.c:607:ptlrpc_connect_interpret()) data3-OST0000_UUID@192.168.2.98@tcp changed handle from 0xfe51139158c64fae to 0xfe511392a35878b3; copying, but this may foreshadow disaster
LustreError: 5080:0:(import.c:607:ptlrpc_connect_interpret()) data4-OST0000_UUID@192.168.2.98@tcp changed handle from 0xfe51139158c6502c to 0xfe511392a35878c1; copying,...
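Not from the thread, but a small diagnostic sketch for messages like these: check that the client can still reach the OSS NID from the log, and look at the state of the affected import. The NID is taken from the excerpt; the parameter path is an assumption and varies by release.
[code]
# From an affected client: verify basic LNET connectivity to the OSS.
lctl ping 192.168.2.98@tcp

# Inspect the import state of the OSC that reported the changed handle
# (on newer releases; older ones expose the same file under /proc).
lctl get_param osc.data3-OST0000-osc-*.import
[/code]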
2013 Mar 18
1
lustre showing inactive devices
...l list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used     Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M   3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID  5.9G    276.1M   5.3G       5%    /mnt/lustre[OST:0]
lustre-OST0001_UUID  5.9G    276.1M   5.3G       5%    /mnt/lustre[OST:1]
lustre-OST0002_UUID  5.9G    276.1M   5.3G       5%    /mnt/lustre[OST:2]
lustre-OST0003_UUID  5.9G    276.1M   5.3G       5%    /mnt/lustre...
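As a hedged aside, not part of the original post: an "inactive device" entry in lfs df usually corresponds to an OSC on that client which is disconnected or has been administratively deactivated, and its state can be inspected with lctl. The device names below are illustrative.
[code]
# On the client: list the OSC devices and their state.
lctl dl | grep osc

# 0 here means the OSC was deactivated on this client; 1 means active.
lctl get_param osc.lustre-OST000*-osc-*.active
[/code]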
2013 Mar 18
1
OST0006 : inactive device
...DS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used     Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M   3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID  5.9G    276.1M   5.3G       5%    /mnt/lustre[OST:0]
lustre-OST0001_UUID  5.9G    276.1M   5.3G       5%    /mnt/lustre[OST:1]
lustre-OST0002_UUID  5.9G    276.1M   5.3G       5%    /mnt/lustre[OST:2]
lustre-OST0003_UUID  5.9G    276.1M   5.3G       5%    /mnt/lustre[OS...
2010 Jul 08
5
No space left on device on not full filesystem
...n 5+0 records out
524288000 bytes (524 MB) copied, 7.66803 s, 68.4 MB/s
But there is a lot of free space on both the OSTs and the MDS:
[client]$ lfs df -h
UUID                 bytes    Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  43.8G    813.4M  40.5G      1%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID  867.7G   45.8G   777.8G     5%    /mnt/lustre[OST:0]
lustre-OST0001_UUID  916.9G   44.9G   825.4G     4%    /mnt/lustre[OST:1]
lustre-OST0002_UUID  916.9G   44.9G   825.5G     4%    /mnt/lustre[OST:2]
lustre-OST0003_UUID  916.9G   44.9G   825.4G     4%    /mnt/lustre[OST:3]
lustre-OST0004_...
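A hedged note, not from the thread: ENOSPC on a Lustre file system that still shows free space overall is often caused by a single OST that is full, or out of inodes, and happens to hold a stripe of the file being written. One way to check, with an illustrative path:
[code]
# Per-OST block and inode usage; look for any OST near 100% or with 0 IFree.
lfs df -h /mnt/lustre
lfs df -i /mnt/lustre

# See which OSTs hold the stripes of the file that hit ENOSPC.
lfs getstripe /mnt/lustre/path/to/large_file
[/code]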
2013 Apr 29
1
OSTs inactive on one client (only)
Hi everyone, I have seen this question here before, but without a very satisfactory answer. One of our half a dozen clients has lost access to a set of OSTs:
> lfs osts
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID ACTIVE
6: lustre-OST0006_UUID ACTIVE
All OSTs show as completely fine on the other clients, and the system is working there. In addition, I ha...
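Not an answer from the thread, but a sketch of what is commonly tried when only one client reports OSTs as INACTIVE: confirm the OSCs were not deactivated locally, re-enable them, or simply remount the client. Parameter names are assumptions and vary by Lustre version.
[code]
# On the affected client: check whether the OSC for an inactive OST was
# disabled locally (0 = deactivated, 1 = active).
lctl get_param osc.lustre-OST0002-osc-*.active

# Re-enable it; remounting the client achieves the same result.
lctl set_param osc.lustre-OST0002-osc-*.active=1
[/code]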
2012 Sep 27
4
Bad reporting inodes free
...s-01:lustre-mds-02:/cetafs 22200087 20949839 1250248 95% /mnt/data
But if I run lfs df -i I get:
UUID                 Inodes     IUsed     IFree      IUse%  Mounted on
cetafs-MDT0000_UUID  975470592  20949223  954521369  2%     /mnt/data[MDT:0]
cetafs-OST0000_UUID  19073280   17822213  1251067    93%    /mnt/data[OST:0]
cetafs-OST0001_UUID  19073280   17822532  1250748    93%    /mnt/data[OST:1]
cetafs-OST0002_UUID  19073280   17822560  1250720    93%    /mnt/data[OST:2]
cetafs-OST0003_UUID  19073280   17822622  1250658    93%    /mnt/data[OST:3]...
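A hedged explanation, not from the thread: the inode numbers a client reports are not just the MDT's counters, because every new file also needs at least one free object on an OST, so nearly full OSTs pull the reported IFree far below the MDT's free inode count. Summing the per-OST columns makes that visible; the mount point is an assumption.
[code]
# Sum IUsed/IFree over the OST lines of lfs df -i and compare the totals
# with the aggregate numbers reported for the whole file system.
lfs df -i /mnt/data | awk '/OST/ { used += $3; free += $4 }
    END { print "OST objects used:", used, "free:", free }'
[/code]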
2008 Feb 12
0
Lustre-discuss Digest, Vol 25, Issue 17
...n machine. The only way to fix the hang is to reboot the server. My users are getting extremely impatient :-/ I see this on the clients:
LustreError: 2814:0:(client.c:975:ptlrpc_expire_one_request()) @@@ timeout (sent at 1202756629, 301s ago) req@ffff8100af233600 x1796079/t0 o6->data-OST0000_UUID@192.168.64.71@o2ib:28 lens 336/336 ref 1 fl Rpc:/0/0 rc 0/-22
Lustre: data-OST0000-osc-ffff810139ce4800: Connection to service data-OST0000 via nid 192.168.64.71@o2ib was lost; in progress operations using this service will wait for recovery to complete.
LustreError: 11-0: an error occu...
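As a hedged diagnostic sketch, not from the digest itself: request timeouts like these are usually chased by checking the network path between client and OSS and watching whether the import keeps cycling through reconnects. The NID is from the log excerpt; everything else is illustrative.
[code]
# From the client that logged the timeout: check LNET reachability of the OSS.
lctl ping 192.168.64.71@o2ib

# Watch the import for the affected OST; repeated state changes point at a
# flaky connection rather than a slow server (parameter path varies by release).
lctl get_param osc.data-OST0000-osc-*.import
[/code]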
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi, We run four-node Lustre 2.3, and I needed to both change the hardware under the MGS/MDS and reassign an OSS IP. At the same time, I added a brand new 10GbE network to the system, which was the reason for the MDS hardware change. I ran tunefs.lustre --writeconf as per chapter 14.4 in the Lustre Manual, and everything mounts fine. Log regeneration apparently works, since it seems to do something, but
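For reference, a hedged outline of the writeconf procedure the post refers to; the device paths are placeholders and the exact steps should be checked against the manual for the release in use.
[code]
# With everything unmounted (clients first, then OSTs, then the MDT),
# regenerate the configuration logs on every server target:
tunefs.lustre --writeconf /dev/<mdt_device>    # on the MDS
tunefs.lustre --writeconf /dev/<ost_device>    # on each OSS, for every OST

# Remount in order: MGS/MDT first, then the OSTs, then the clients, so the
# regenerated logs (and any changed NIDs) are picked up.
[/code]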
2007 Nov 26
15
bad 1.6.3 striped write performance
...ything relevant in bugzilla. Is anyone else seeing this? It seems weird that 1.6.3 has been out there for a while and nobody else has reported it, but I can't think of any more testing variants I can try... anyway, some more simple setup info:
% lfs getstripe /mnt/testfs/
OBDS:
0: testfs-OST0000_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE
/mnt/testfs/
default stripe_count: -1 stripe_size: 1048576 stripe_offset: -1
/mnt/testfs/blah
        obdidx   objid   objid   group
        1        3       0x3     0
        0        2...
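Not from the thread, but one common way to narrow down a striped-write slowdown is to write single-stripe files pinned to each OST in turn and compare the rates; the option names follow current lfs syntax and may differ on 1.6.x.
[code]
# Create one file per OST, each striped to exactly one target.
lfs setstripe -c 1 -i 0 /mnt/testfs/ost0_test
lfs setstripe -c 1 -i 1 /mnt/testfs/ost1_test

# Write the same amount to each and compare throughput; one slow OST will
# drag down every file striped across both targets.
dd if=/dev/zero of=/mnt/testfs/ost0_test bs=1M count=1024
dd if=/dev/zero of=/mnt/testfs/ost1_test bs=1M count=1024
[/code]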
2008 Jan 10
4
1.6.4.1 - active client evicted
...2 previous similar messages
Jan 10 12:52:21 Lustre: hpfs-MDT0000-mdc-ffff8100016d2c00: Connection restored to service hpfs-MDT0000 using nid 130.239.78.233@tcp.
Jan 10 12:57:46 Lustre: setting import hpfs-MDT0000_UUID INACTIVE by administrator request
Jan 10 12:59:26 Lustre: setting import hpfs-OST0000_UUID INACTIVE by administrator request
----------------------------8<------------------------
Logs from the MGS/MDT:
----------------------------8<------------------------
Jan 10 12:20:31 Lustre: MGS: haven't heard from client 01a8bcfc-fd98-90a9-6aeb-7c331a658b2e (at 130.239.78.238@tc...
2008 Jan 02
9
lustre quota problems
Hello, I have several problems with quota on our test cluster: When I set the quota for a person to a given value (e.g. the values which are provided in the operations manual), I'm able to write exactly the amount which is set with setquota. But when I delete the file(s) I'm not able to use this space again. Here is what I've done in detail: lfs checkquota
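For context, not from the thread: the usual quota commands look roughly like this. The limits, user name and mount point are placeholders, and option handling differs between old and new Lustre releases.
[code]
# Set block (KB) and inode limits for a user: soft (-b/-i) and hard (-B/-I).
lfs setquota -u someuser -b 1000000 -B 1200000 -i 10000 -I 12000 /mnt/lustre

# Report current usage and limits for that user.
lfs quota -u someuser /mnt/lustre
[/code]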
2013 Feb 12
2
Lost folders after changing MDS
.... I managed to mount the Lustre filesystem and if I do lfs df -h, I get:
NB> I deactivated those two OSTs below.
[root@mgs data]# lfs df -h
UUID              bytes   Used    Available  Use%  Mounted on
AC3-MDT0000_UUID  37.5G   499.5M  34.5G      1%    /data[MDT:0]
AC3-OST0000_UUID  16.4T   2.2T    13.3T      14%   /data[OST:0]
AC3-OST0001_UUID  16.4T   1.8T    13.7T      12%   /data[OST:1]
AC3-OST0002_UUID  6.4T    6.0T    49.2G      99%   /data[OST:2]
AC3-OST0003_UUID  6.4T    6.1T    912.9M     100%  /data[OST:3]
AC3-OST0004_UUID...
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster uses all GigE and has about 608 nodes / 1854 cores. We have a lot of jobs that die and/or go into high I/O wait; strace shows processes stuck in fstat(). The big problem (I think, and I would like some feedback on it) is that of these 608 nodes, 209 of them have in dmesg