Displaying 10 results from an estimated 10 matches for "ost0000".
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
...dmesg -c >dmesg.1
umount /mnt/datafs
umount /mnt/data/ost0
umount /mnt/data/mdt
e2label /dev/sda1
e2label /dev/sda2
dmesg -c >dmesg.2
mount.lustre /dev/sda1 /mnt/data/mdt
mount.lustre /dev/sda2 /mnt/data/ost0
dmesg -c >dmesg.3
while cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status \
| egrep 'RECOVERING|time remaining'; do sleep 30; done
mount.lustre pool4@tcp:/datafs /mnt/datafs
}
aaa 2>&1 | tee aaa.0; dmesg -c >dmesg.4
The files dmesg.{0,1,2,3,4} and aaa.0 are available at:
http://fnapcf.fnal.gov/~ron/lustre-1.6.4.2-dm...
2008 Feb 12
0
Lustre-discuss Digest, Vol 25, Issue 17
...n machine. The only way to fix the hang is to reboot the
server. My users are getting extremely impatient :-/
I see this on the clients-
LustreError: 2814:0:(client.c:975:ptlrpc_expire_one_request()) @@@
timeout (sent at 1202756629, 301s ago) req@ffff8100af233600 x1796079/
t0 o6->data-OST0000_UUID@192.168.64.71@o2ib:28 lens 336/336 ref 1 fl
Rpc:/0/0 rc 0/-22
Lustre: data-OST0000-osc-ffff810139ce4800: Connection to service data-
OST0000 via nid 192.168.64.71@o2ib was lost; in progress operations
using this service will wait for recovery to complete.
LustreError: 11-0: an error...
2007 Nov 23
2
How to remove OST permanently?
All,
I've added a new 2.2 TB OST to my cluster easily enough, but this new
disk array is meant to replace several smaller OSTs that I used to
have, which were only 120 GB, 500 GB, and 700 GB.
Adding an OST is easy, but how do I REMOVE the small OSTs that I no
longer want to be part of my cluster? Is there a command to tell Lustre
to move all the file stripes off one of the nodes?
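One common approach is a sketch like the following: deactivate the OST on the MDS so no new objects are allocated on it, then migrate existing files off it from a client. The device number and paths here are hypothetical, and the lfs_migrate helper ships only with later Lustre releases:

```shell
# On the MDS: find the local device number of the OST to retire
lctl dl | grep OST0003

# Deactivate it so the MDT stops allocating new objects there
# (the device number 7 is hypothetical)
lctl --device 7 deactivate

# On a client: find files with objects on that OST and migrate
# them onto the remaining OSTs
lfs find --obd datafs-OST0003_UUID /mnt/datafs | lfs_migrate -y
```

Once the OST is empty, it can be removed permanently from the configuration with a writeconf on the remaining targets.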
2008 Feb 14
9
how do you mount mountconf (i.e. 1.6) lustre on your servers?
As any of you using version 1.6 of Lustre knows, Lustre servers can now
be started simply by mounting the devices they use. Even
an /etc/fstab entry can be used if the mount can be delayed until
the network is started.
Given this change, you may also have noticed that we have eliminated the
initscript for Lustre that existed for releases prior to 1.6.
I'd like to take a
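Such fstab entries might look like the following sketch; the device path, mount point, and fsname are hypothetical. The _netdev option is what delays the mount until networking is up:

```
# Lustre server target (MDT or OST), mounted by device
/dev/sda2            /mnt/data/ost0  lustre  defaults,_netdev  0 0

# Lustre client mount
mgsnode@tcp:/datafs  /mnt/datafs     lustre  defaults,_netdev  0 0
```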
2008 Jan 10
4
1.6.4.1 - active client evicted
...attr_async fails: rc=-5
Jan 10 12:42:49 LustreError: 8001:0:(llite_lib.c:1480:ll_setattr_raw()) obd_setattr_async fails: rc=-5
Jan 10 12:42:50 LustreError: 11-0: an error occurred while communicating with 130.239.78.239@tcp. The ost_setattr operation failed with -107
Jan 10 12:42:50 Lustre: hpfs-OST0000-osc-ffff8100016d2c00: Connection to service hpfs-OST0000 via nid 130.239.78.239@tcp was lost; in progress operations using this service will wait for recovery to complete.
Jan 10 12:42:50 LustreError: 167-0: This client was evicted by hpfs-OST0000; in progress operations using this service will...
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi,
We run four-node Lustre 2.3, and I needed to both change hardware
under MGS/MDS and reassign an OSS ip. Just the same, I added a brand
new 10GE network to the system, which was the reason for MDS hardware
change.
I ran tunefs.lustre --writeconf as per chapter 14.4 in the Lustre Manual,
and everything mounts fine. Log regeneration apparently works, since
it seems to do something, but
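The writeconf procedure referenced above is roughly the following sketch; the device paths are hypothetical. All targets must be unmounted first, and on remount the MGS must come up before the MDT and OSTs so the configuration logs can be regenerated:

```shell
# Unmount clients first, then all server targets
umount /mnt/datafs          # on each client
umount /mnt/data/ost0       # on each OSS
umount /mnt/data/mdt        # on the MDS

# Regenerate the configuration logs on every target
tunefs.lustre --writeconf /dev/sda1   # combined MGS/MDT
tunefs.lustre --writeconf /dev/sda2   # each OST

# Remount in order: MGS/MDT first, then OSTs, then clients
mount -t lustre /dev/sda1 /mnt/data/mdt
mount -t lustre /dev/sda2 /mnt/data/ost0
```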
2013 Oct 24
0
Re: [zfs-discuss] Problems getting Lustre started with ZFS
> You need to use unique index numbers for each OST, i.e. OST0000,
> OST0001, etc.
I cannot see how to control this. I am creating new OSTs, but they are
all getting the same index number.
Could this be a problem with the mgs?
Thanks,
Andrew
>
> Ned
>
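The index can be fixed at format time with the --index option to mkfs.lustre. A sketch, with hypothetical ZFS pool/dataset names and MGS NID:

```shell
# Each OST gets an explicit, unique index when it is formatted
mkfs.lustre --ost --backfstype=zfs --index=0 \
    --mgsnode=mgs@tcp lustre-ost0/ost0
mkfs.lustre --ost --backfstype=zfs --index=1 \
    --mgsnode=mgs@tcp lustre-ost1/ost1
```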
2013 Feb 12
2
Lost folders after changing MDS
.... I managed to mount the Lustre filesystem and if I do lfs df -h, I get:
NB> I deactivated those two OSTs below.
[root@mgs data]# lfs df -h
UUID               bytes    Used  Available  Use%  Mounted on
AC3-MDT0000_UUID   37.5G  499.5M      34.5G    1%  /data[MDT:0]
AC3-OST0000_UUID   16.4T    2.2T      13.3T   14%  /data[OST:0]
AC3-OST0001_UUID   16.4T    1.8T      13.7T   12%  /data[OST:1]
AC3-OST0002_UUID    6.4T    6.0T      49.2G   99%  /data[OST:2]
AC3-OST0003_UUID    6.4T    6.1T     912.9M  100%  /data[OST:3]
AC3-OST0004...
2008 Feb 04
32
Luster clients getting evicted
on our cluster that has been running lustre for about 1 month. I have
1 MDT/MGS and 1 OSS with 2 OSTs.
Our cluster uses all GigE and has about 608 nodes (1854 cores).
We have a lot of jobs that die and/or go into high IO wait; strace
shows processes stuck in fstat().
The big problem (I think), and the one I would like feedback on, is
that 209 of these 608 nodes have in dmesg
2008 Mar 14
0
Help needed in Building lustre using pre-packaged releases
...s,
Don't your Lustre volumes have a label on them?
On the one cluster I've got, the physical storage is shared with a number of
other systems, so the device information can change over time ... so I use
device labels in my /etc/fstab and friends.
Something like 'lustre-OST0000', 'lustre-OST0001' ... although when the
devices are actually mounted, they show up with their /dev node names.
Look through /proc/fs/lustre for Lustre volume names (they show up when
they're mounted), and you can winnow your list down by mounting by name,
chec...
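Checking and using such labels might look like the following sketch; the device path, label, and mount point are hypothetical:

```shell
# Read the label of an ldiskfs-backed Lustre target
e2label /dev/sda2

# Mount by label instead of by /dev node, e.g. in /etc/fstab:
#   LABEL=lustre-OST0000  /mnt/data/ost0  lustre  defaults,_netdev  0 0
mount -L lustre-OST0000 /mnt/data/ost0
```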