Displaying 20 results from an estimated 130 matches similar to: "Lustre 2.4 MDT: LustreError: Communicating with 0@lo: operation mds_connect failed with -11"
2007 Oct 22
0
The mds_connect operation failed with -11
Hi, list:
I'm trying to configure Lustre with:
1 MGS -------------> 192.168.3.100 with mkfs.lustre --mgs /dev/md1 ;
mount -t lustre ...
1 MDT ------------> 192.168.3.101 with mkfs.lustre --fsname=datafs00
--mdt --mgsnode=192.168.3.100 /dev/sda3 ; mount -t lustre ...
4 OSTs -----------> 192.168.3.102-104 with mkfs.lustre --fsname=datafs00
--ost --mgsnode=192.168.3.100@tcp0
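For reference, a minimal command sequence for this kind of layout might look like the following; the mount points and the OST device name are assumptions for illustration, not taken from the post:

# MGS node (192.168.3.100)
mkfs.lustre --mgs /dev/md1
mount -t lustre /dev/md1 /mnt/mgs

# MDT node (192.168.3.101)
mkfs.lustre --fsname=datafs00 --mdt --mgsnode=192.168.3.100@tcp0 /dev/sda3
mount -t lustre /dev/sda3 /mnt/mdt

# Each OSS node (192.168.3.102-104), assuming the OST device is /dev/sdb
mkfs.lustre --fsname=datafs00 --ost --mgsnode=192.168.3.100@tcp0 /dev/sdb
mount -t lustre /dev/sdb /mnt/ost

Error -11 is -EAGAIN; during startup it usually just means the target being contacted is not ready yet (for example, still running recovery), so the connect is retried.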
2007 Sep 28
0
llog_origin_handle_cancel and other LustreErrors
Hi again!
Same setup as before (Lustre 1.6.2 + 2.6.18 kernel).
This time things suddenly started to be very slow (as in periodically
stalling), and we found a bunch of llog_ LustreErrors on the MDS. Some
time later everything had automagically recovered and was back to normal
speed.
Any idea on the meaning/cause of these errors?
How serious are "LustreError" errors in
2008 Feb 22
0
lustre error
Dear All,
Yesterday evening our cluster stopped.
Two of our nodes tried to take the resource from each other; as far as
I could tell, neither could see the other side.
I stopped heartbeat and the resources, started them again, and
everything came back online and worked fine.
This morning I saw this in logs:
Feb 22 03:25:07 node4 kernel: Lustre:
7:0:(linux-debug.c:98:libcfs_run_upcall()) Invoked LNET upcall
2008 Jan 10
4
1.6.4.1 - active client evicted
Hi!
We've started to poke and prod at Lustre 1.6.4.1, and it seems to
mostly work (we haven't had it OOPS on us yet like the earlier
1.6 versions did).
However, we had this weird incident where an active client (it was
copying 4GB files and running ls at the time) got evicted by the MDS
and all OSTs. After a while the logs indicate that it did recover the
connection
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
Chris,
Perhaps you need to perform some write_conf-like command. I'm not sure if this is needed in 1.6 or not.
Shane
----- Original Message -----
From: lustre-discuss-bounces@lists.lustre.org <lustre-discuss-bounces@lists.lustre.org>
To: lustre-discuss <lustre-discuss@lists.lustre.org>
Sent: Fri Mar 07 12:03:17 2008
Subject: Re: [Lustre-discuss] Multihomed
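If a writeconf is indeed what is needed, the usual (hedged) sketch on 1.6 is: stop the filesystem on all servers, regenerate the configuration logs on each target, then remount so that both the IB and Ethernet NIDs are registered. The device paths below are placeholders/assumptions:

# On the MDS, with the MDT unmounted:
tunefs.lustre --writeconf /dev/mdt_device
# On each OSS, with the OSTs unmounted:
tunefs.lustre --writeconf /dev/ost_device
# Then remount in order: MGS/MDT first, then the OSTs, then the clients.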
2010 Aug 11
0
OSS: IMP_CLOSED errors
Hello.
OS CentOS 5.4
uname -a
Linux oss0 2.6.18-128.7.1.el5_lustre.1.8.1.1 #1 SMP Tue Oct 6 05:48:57 MDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Lustre 1.8.1.1
OSS server.
A lot of errors in /var/log/messages:
Aug 10 14:46:34 oss0 kernel: LustreError: 2802:0:(client.c:775:ptlrpc_import_delay_req()) Skipped 1 previous similar message
Aug 10 15:07:01 oss0 kernel: LustreError:
2007 Nov 07
1
ll_cfg_requeue process timeouts
Hi,
Our environment is: 2.6.9-55.0.9.EL_lustre.1.6.3smp
I am getting the following errors from two OSSs
...
Nov 7 10:39:51 storage09.beowulf.cluster kernel: LustreError:
23045:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID
req@00000100b410be00 x4190687/t0 o101->MGS@MGC10.143.245.201@tcp_0:26
lens 232/240 ref 1 fl Rpc:/0/0 rc 0/0
Nov 7 10:39:51
2008 Jan 28
1
Questions on MDT inode size
Hi,
The documentation warns against using inodes smaller than 512 bytes on the
MDT. If I plan to use a stripe count of one (I have many small files), is
it possible to use an inode size of 256 bytes and still use in-inode EAs
for metadata?
Thanks
/Jakob
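For reference, the inode size is fixed at format time and is passed through to the backing mke2fs via --mkfsoptions; a hedged sketch (fsname, MGS NID and device are assumptions):

# Format an MDT with an explicit inode size (512 bytes shown here, the
# commonly recommended value; -I 256 would request the smaller size):
mkfs.lustre --fsname=testfs --mdt --mgsnode=192.168.3.100@tcp0 \
    --mkfsoptions="-I 512" /dev/sda3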
2010 Jul 20
1
mdt backup tar --xattrs question
Greetings Group!
I hope this will be an easy one. To conserve steps in backing up the
metadata extended attributes of a Lustre MDT, I am looking at using a
newer version of tar combined with its --xattrs option. (Note:
Previously I have used the MDT two-step backup from the Lustre Manual
and it has been successful.) If I can back up the extended attributes
via tar so that I don't
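For what it's worth, the single-step variant with a GNU tar built with xattr support would look roughly like this; the device, mount point and archive path are assumptions, and whether the trusted.* Lustre EAs are actually captured should be verified with a test restore:

# Mount the MDT as ldiskfs (not as type lustre) and archive it with EAs:
mount -t ldiskfs /dev/mdtdev /mnt/mdt_ldiskfs
cd /mnt/mdt_ldiskfs
tar czf /backup/mdt_backup.tgz --xattrs --sparse .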
2007 Oct 25
1
Error message
I'm seeing this error message on one of my OSSs but not the other
three. Any idea what is causing it?
Oct 25 13:58:56 oss2 kernel: LustreError:
3228:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID
req@f6b13200 x18040/t0 o101->MGS@MGC192.168.0.200@tcp_0:26 lens 176/184
ref 1 fl Rpc:/0/0 rc 0/0
Oct 25 13:58:56 oss2 kernel: LustreError:
2014 May 26
0
Remove filesystem directories from MDT
Hello,
I have some problems in my filesystem. When I browse the filesystem from a client, a specific directory contains directories that in turn contain the same directories, in other words:
LustreFS -> dir A -> dir B -> dir B -> dir B -> dir B -> dir B…
This directory, and its children, have the same obdidx/objid:
[root@client vm-106-disk-1.raw]# lfs getstripe vm-106-disk-1.raw
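For context, the comparison being made is between the obdidx/objid lines of the striping output, which for a one-stripe file has this general shape (values below are purely illustrative):

lmm_stripe_count:   1
lmm_stripe_size:    1048576
lmm_stripe_offset:  5
      obdidx           objid          objid            group
           5         1234567       0x12d687                0

Two different files reporting the same obdidx/objid pair reference the same object on the same OST, which is what is being observed here.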
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't
find a bz for this, but it would appear that tunefs.lustre --print fails
on a Lustre MDT or OST device if it is mounted with MMP.
Is this expected behavior?
TIA
mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1
checking for existing Lustre data: not found
tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
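A hedged workaround, assuming the failure is simply MMP protecting the in-use device: run the query while the target is not mounted anywhere, e.g.:

umount /mnt/mdt
tunefs.lustre --print /dev/mapper/mdt1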
2013 Apr 29
1
OSTs inactive on one client (only)
Hi everyone,
I have seen this question here before, but without a very
satisfactory answer. One of our half a dozen clients has
lost access to a set of OSTs:
> lfs osts
OBDS::
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID ACTIVE
6: lustre-OST0006_UUID ACTIVE
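If the OSTs were deactivated only on this client (rather than globally on the MDS), a hedged way to inspect and re-enable them from the client side is roughly:

# On the affected client: list the OSC devices and their state
lctl dl | grep osc
# Re-activate an inactive OSC (repeat for OST0003 and OST0004); the exact
# device-name pattern depends on the fsname and mount instance:
lctl set_param osc.lustre-OST0002-osc-*.active=1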
2008 Mar 06
2
strange lustre errors
Hi,
On a few of the HPC cluster nodes, I am seeing a new Lustre
error that is pasted below. The volumes are working fine and there
is nothing on the OSS and MDS to report.
LustreError: 5080:0:(import.c:607:ptlrpc_connect_interpret())
data3-OST0000_UUID@192.168.2.98@tcp changed handle from
0xfe51139158c64fae to 0xfe511392a35878b3; copying, but this may
foreshadow disaster
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton:
http://www.linux-mag.com/id/7839
Does anyone have views on whether this sort of caching would be useful for
the MDT? My feeling is that MDT reads are probably pretty random but
writes might benefit...?
GREG
--
Greg Matthews 01235 778658
Senior Computer Systems Administrator
Diamond Light Source, Oxfordshire, UK
2010 Sep 16
2
Lustre module not getting loaded in MDS
Hello All,
I have installed and configured Lustre 1.8.4 on SuSE 11.0, and everything
works fine when I run modprobe lustre and the lustre module gets loaded.
But when the server reboots, the module is not loaded automatically. Kindly help.
LNET is configured in /etc/modprobe.conf.local as below.
options lnet networks=tcp0(eth0) accept=all
For loading the lustre module at boot I tried including the lustre module in
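On SuSE/SLES the conventional place for modules that must be loaded at boot is the MODULES_LOADED_ON_BOOT variable in /etc/sysconfig/kernel; a hedged sketch (exact handling can differ between SuSE releases):

# /etc/sysconfig/kernel
MODULES_LOADED_ON_BOOT="lustre"

Alternatively, mounting the Lustre target via /etc/fstab with the _netdev option causes the module to be loaded on demand when the filesystem of type lustre is mounted, which sidesteps the boot-time modprobe entirely.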
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have
1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster is all GigE and has about 608 nodes / 1854 cores.
We have a lot of jobs that die and/or go into high I/O wait; strace
shows processes stuck in fstat().
The big problem (I think), and what I would like some feedback on, is
that of these 608 nodes, 209 of them have in dmesg
2007 Nov 07
9
How To change server recovery timeout
Hi,
Our Lustre environment is:
2.6.9-55.0.9.EL_lustre.1.6.3smp
I would like to change the recovery timeout from the default value of 250s to
something longer.
I tried the example from the manual:
set_timeout <secs> Sets the timeout (obd_timeout) for a server
to wait before failing recovery.
We performed that experiment on our test lustre installation with one
OST.
storage02 is our OSS
[root@
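For comparison, the two ways this timeout is usually raised on a 1.6 system (the 600-second value and the fsname "testfs" are assumptions):

# Temporarily, on a running server:
echo 600 > /proc/sys/lustre/timeout
# Persistently, run on the MGS, for all servers of the filesystem:
lctl conf_param testfs.sys.timeout=600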
2008 Jun 17
4
maximum MDT inode count
For future filesystem compatibility, we are wondering if there are any
Lustre MDT filesystems in existence that have 2B or more total inodes?
This is fairly unlikely, because it would require an MDT filesystem
that is > 8TB in size (which isn't even supported yet) and/or has been
formatted with specific options to increase the total number of inodes.
This can be checked with
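A hedged way to check this on an existing MDT (device path is an assumption):

# On the MDS, against the (ldiskfs/ext3) MDT device:
dumpe2fs -h /dev/mdtdev 2>/dev/null | grep -i 'inode count'
# Or from any client, for the mounted filesystem as a whole:
df -i /mnt/lustre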