Displaying 16 results from an estimated 16 matches for "mdt0000_uuid".
2008 Jan 10
4
1.6.4.1 - active client evicted
...DT0000; in progress operations using this service will fail.
Jan 10 12:40:38 LustreError: 7975:0:(mdc_locks.c:424:mdc_finish_enqueue()) ldlm_cli_enqueue: -5
Jan 10 12:40:38 LustreError: 7975:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID req@ffff8100c5298800 x649493/t0 o101->hpfs-MDT0000_UUID@130.239.78.233@tcp:12 lens 432/912 ref 1 fl Rpc:/0/0 rc 0/0
Jan 10 12:40:38 LustreError: 7975:0:(mdc_locks.c:424:mdc_finish_enqueue()) ldlm_cli_enqueue: -108
Jan 10 12:41:40 LustreError: 7979:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID req@ffff81005ee12e00 x649521/t0 o101->...
2013 Mar 18
1
lustre showing inactive devices
...DS , 2 OSS/OST and 2 Lustre Client. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID  5.9G   276.1M  5.3G       5%    /mnt/lustre[OST:0]
lustre-OST0001_UUID  5.9G   276.1M  5.3G       5%    /mnt/lustre[OST:1]
lustre-OST0002_UUID  5.9G   276.1M  5.3G       5%    /mnt/lustre...
2013 Mar 18
1
OST0006 : inactive device
...installed 1 MDS , 2 OSS/OST and 2 Lustre Client. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID  5.9G   276.1M  5.3G       5%    /mnt/lustre[OST:0]
lustre-OST0001_UUID  5.9G   276.1M  5.3G       5%    /mnt/lustre[OST:1]
lustre-OST0002_UUID  5.9G   276.1M  5.3G       5%    /mnt/lustre[OS...
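For "inactive device" reports like the two above, a common first check is whether the MDS's OSC device for that OST has been deactivated. A sketch of the check (the device index 7 below is a placeholder, not taken from the poster's system):
[code]
# On the MDS, list configured devices; an inactive one shows IN instead of UP
lctl dl
# If the OSC for the affected OST is inactive, reactivate it by device number
lctl --device 7 activate
[/code]
If the device is UP on the MDS, the next place to look is the client-side OSC state and the OSS logs.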
2007 Nov 23
2
How to remove OST permanently?
All,
I've added a new 2.2 TB OST to my cluster easily enough, but this new
disk array is meant to replace several smaller OSTs that I used to have,
which were only 120 GB, 500 GB, and 700 GB.
Adding an OST is easy, but how do I REMOVE the small OSTs that I no
longer want to be part of my cluster? Is there a command to tell lustre
to move all the file stripes off one of the nodes?
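The usual answer is a two-step procedure: deactivate the OST on the MDS so no new objects land there, then rewrite the files that have objects on it. A sketch, with placeholder device number and OST name (on older releases without lfs_migrate, the last step is a manual copy-and-rename):
[code]
# 1. On the MDS: find the OSC device for the OST being retired
lctl dl | grep osc
# 2. Deactivate it so no new objects are allocated there (index 5 is a placeholder)
lctl --device 5 deactivate
# 3. On a client: locate files with objects on that OST and rewrite them
lfs find --obd lustre-OST0001_UUID /mnt/lustre | lfs_migrate -y
[/code]
Once no files reference the OST, it can be left permanently deactivated or removed at the next writeconf.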
2010 Jul 08
5
No space left on device on not full filesystem
...eda:/mnt/lustre# dd bs=100M count=10 < /dev/zero > qqq
^C5+0 records in
5+0 records out
524288000 bytes (524 MB) copied, 7.66803 s, 68.4 MB/s
But there is a lot of free space on both the OSTs and the MDS:
[client]$ lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 43.8G 813.4M 40.5G 1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 867.7G 45.8G 777.8G 5% /mnt/lustre[OST:0]
lustre-OST0001_UUID 916.9G 44.9G 825.4G 4% /mnt/lustre[OST:1]
lustre-OST0002_UUID 916.9G 44.9G 825.5G 4% /mnt/lustre[OST:2]
lustre-OST0003_...
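ENOSPC on a filesystem that lfs df shows as mostly free is often a per-target problem: the write only uses the MDT plus the OSTs the file actually stripes over. Two hedged checks (the path and file name are from the poster's transcript):
[code]
# Inode usage per target -- a full MDT returns ENOSPC even with free blocks
lfs df -i /mnt/lustre
# Which OSTs this particular file stripes over
lfs getstripe /mnt/lustre/qqq
[/code]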
2007 Dec 11
2
lustre + nfs + alphas
...g it.
On the nfs export server i see these messages--
Lustre: 4224:0:(o2iblnd_cb.c:412:kiblnd_handle_rx()) PUT_NACK from 192.168.64.70@o2ib
LustreError: 4400:0:(client.c:969:ptlrpc_expire_one_request()) @@@ timeout (sent at 1197415542, 100s ago) req@ffff810827bfbc00 x38827/t0 o36->data-MDT0000_UUID@192.168.64.70@o2ib:12 lens 14256/672 ref 1 fl Rpc:/0/0 rc 0/-22
Lustre: data-MDT0000-mdc-ffff81082d702000: Connection to service data-MDT0000 via nid 192.168.64.70@o2ib was lost; in progress operations using this service
will wait for recovery to complete.
A trace of the hung nfs daemons rev...
2008 Feb 04
32
Luster clients getting evicted
on our cluster that has been running lustre for about 1 month. I have
1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster uses all GigE and has about 608 nodes, 1854 cores.
We have a lot of jobs that die and/or go into high IO wait; strace
shows processes stuck in fstat().
The big problem (I think), which I would like some feedback on, is that
of these 608 nodes, 209 of them have in dmesg
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
....c:325:class_setup()) setup
ddnlfs-MDT0000-mdc-0000010430934400 failed (-2)
LustreError: 11043:0:(obd_config.c:1062:class_config_llog_handler())
Err -2 on cfg command:
LustreError: 11141:0:(connection.c:142:ptlrpc_put_connection()) NULL connection
Lustre: cmd=cf003 0:ddnlfs-MDT0000-mdc 1:ddnlfs-MDT0000_UUID
2:36.121.255.201@tcp
LustreError: 15c-8: MGC36.122.255.201@o2ib: The configuration from log
'ddnlfs-client' failed (-2). This may be the result of communication
errors between this node and the MGS, a bad configuration, or other
errors. See the syslog for more information.
L...
2012 Sep 27
4
Bad reporting inodes free
...Inodes IUsed IFree IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248 95% /mnt/data
But if I run lfs df -i I get:
UUID                 Inodes     IUsed     IFree      IUse%  Mounted on
cetafs-MDT0000_UUID  975470592  20949223  954521369  2%     /mnt/data[MDT:0]
cetafs-OST0000_UUID  19073280   17822213  1251067    93%    /mnt/data[OST:0]
cetafs-OST0001_UUID  19073280   17822532  1250748    93%    /mnt/data[OST:1]
cetafs-OST0002_UUID  19073280   17822560  1250720    93%    /mnt/data[OST:2]...
2007 Nov 07
9
How To change server recovery timeout
...MGS node so I moved
them to the MGS server called storage03
[root@storage03 ~]# lctl dl
0 UP mgs MGS MGS 9
1 UP mgc MGC10.143.245.3@tcp f51a910b-a08e-4be6-5ada-b602a5ca9ab3 5
2 UP mdt MDS MDS_uuid 3
3 UP lov home-md-mdtlov home-md-mdtlov_UUID 4
4 UP mds home-md-MDT0000 home-md-MDT0000_UUID 5
5 UP osc home-md-OST0001-osc home-md-mdtlov_UUID 5
[root@storage03 ~]# lctl device 5
[root@storage03 ~]# lctl conf_param obd_timeout=600
error: conf_param: Function not implemented
[root@storage03 ~]# lctl --device 5 conf_param obd_timeout=600
error: conf_param: Function not implement...
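The "Function not implemented" errors are consistent with conf_param expecting a filesystem-prefixed parameter name and being run on the MGS; bare obd_timeout is not a valid conf_param name. A hedged sketch of the documented form, using the home-md filesystem name from the lctl dl output above:
[code]
# On the MGS node only -- sets the timeout permanently for all of home-md
lctl conf_param home-md.sys.timeout=600
[/code]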
2010 Sep 18
0
no failover with failover MDS
...: Reactivating import
Lustre: 14530:0:(client.c:1476:ptlrpc_expire_one_request()) @@@ Request
x1347247522447397 sent from gsilust-MDT0000-mdc-ffff81033d489400 to NID
10.12.115.120@tcp 5s ago has timed out (5s prior to deadline).
req@ffff8103312da400 x1347247522447397/t0
o38->gsilust-MDT0000_UUID@10.12.115.120@tcp:12/10 lens 368/584 e 0 to 1
dl 1284835365 ref 1 fl Rpc:N/0/0 rc 0/0
Obviously the clients stubbornly try to connect to the failed server,
10.12.115.120.
I'm sure the failover has worked before, since server A had its problems
last January, when the MDT was moved t...
2013 Feb 12
2
Lost folders after changing MDS
...OGS OBJECTS/*
on the new MDT partition.
I also upgraded from 1.8.8 to 2. I managed to mount the Lustre filesystem and if I do lfs df -h, I get:
NB> I deactivated those two OSTs below.
[root@mgs data]# lfs df -h
UUID bytes Used Available Use% Mounted on
AC3-MDT0000_UUID 37.5G 499.5M 34.5G 1% /data[MDT:0]
AC3-OST0000_UUID 16.4T 2.2T 13.3T 14% /data[OST:0]
AC3-OST0001_UUID 16.4T 1.8T 13.7T 12% /data[OST:1]
AC3-OST0002_UUID 6.4T 6.0T 49.2G 99% /data[OST:2]
AC3-OST0003_UUID...
2008 Jan 02
9
lustre quota problems
Hello,
I've several problems with quota on our testcluster:
When I set the quota for a person to a given value (e.g. the values which
are provided in the operations manual), I'm able to write exactly the
amount which is set with setquota.
But when I delete the file(s), I'm not able to use this space again.
Here is what I've done in detail:
lfs checkquota
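For reference, a sketch of the quota workflow as documented in the manual; the username and limits are illustrative placeholders, and the exact setquota syntax (positional limits vs. -b/-B/-i/-I flags) varies between Lustre releases:
[code]
# (Re)build the quota files, then set and verify limits for one user
lfs quotacheck -ug /mnt/lustre
lfs setquota -u someuser -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
lfs quota -u someuser /mnt/lustre
[/code]
Space that stays "used" after deleting files usually points at stale quota accounting, which rerunning quotacheck is meant to repair.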
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi,
We run four-node Lustre 2.3, and I needed to both change the hardware
under the MGS/MDS and reassign an OSS IP. At the same time, I added a
brand new 10GE network to the system, which was the reason for the MDS
hardware change.
I ran tunefs.lustre --writeconf as per chapter 14.4 in Lustre Manual,
and everything mounts fine. Log regeneration apparently works, since
it seems to do something, but
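For context, the chapter 14.4 procedure the poster followed boils down to the following; the device paths are placeholders, and every target must be unmounted before writeconf runs:
[code]
# With the filesystem stopped on all servers:
tunefs.lustre --writeconf /dev/sdX   # on the MDT (device path is a placeholder)
tunefs.lustre --writeconf /dev/sdY   # on each OST
# Then remount in order: MGS/MDT first, OSTs next, clients last
[/code]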
2008 Feb 12
0
Lustre-discuss Digest, Vol 25, Issue 17
...>>> lock callback timer expired: evicting client
>>> 2faf3c9e-26fb-64b7-ca6c-7c5b09374e67@NET_0x200000aa4008d_UUID
>>> nid 10.164.0.141@tcp ns: mds-nobackup-MDT0000_UUID
>>> lock: 00000100476df240/0xbc269e05c512de3a lrc: 1/0,0 mode: CR/CR
>>> res: 11240142/324715850 bits 0x5 rrc: 2 type: IBT flags: 20
>>> remote: 0x4e54bc800174cd08 expref: 372 pid 26925
>>>>>>...
2007 Nov 26
15
bad 1.6.3 striped write performance
...1
/mnt/testfs/blah
obdidx objid objid group
1 3 0x3 0
0 2 0x2 0
% lfs df
UUID                 1K-blocks  Used     Available  Use%  Mounted on
testfs-MDT0000_UUID  1534832    306680   1228152    19%   /mnt/testfs[MDT:0]
testfs-OST0000_UUID  15481840   3803284  11678556   24%   /mnt/testfs[OST:0]
testfs-OST0001_UUID  15481840   3803284  11678556   24%   /mnt/testfs[OST:1]
filesystem summary:  30963680   7606568  23357112   24%   /mnt/testfs
cheers,
robin
ps...