Displaying 20 results from an estimated 4000 matches similar to: "lfs_migrate question"
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem, and I wanted to separate the MGS/MDS from the OSS, unlike the previous setup), and then did this:
For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
If you are unable to connect both
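A rough sketch of the surrounding steps, assuming the copy is done offline (the old OST is unmounted from Lustre first), reusing the placeholder device names from above, and an rsync 3.x with xattr support, since Lustre keeps object metadata in extended attributes:

  mount -t ldiskfs /dev/old /mnt/ost_old
  mount -t ldiskfs /dev/new /mnt/ost_new
  rsync -aSvX /mnt/ost_old/ /mnt/ost_new   # -X copies xattrs; trailing slash on ost_old/ matters
  umount /mnt/ost_old /mnt/ost_new
  mount -t lustre /dev/new /mnt/ost        # serve the copy as the OST again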
2012 Sep 27
4
Bad reporting inodes free
Hello,
When I run a "df -i" in my clients I get 95% indes used or 5% inodes free:
Filesystem                           Inodes    IUsed     IFree   IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs  22200087  20949839  1250248   95% /mnt/data
But if I run "lfs df -i" I get:
UUID                    Inodes     IUsed     IFree  I
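The usual way to reconcile the two views is to compare the per-target counts with the aggregate; a brief illustration, using the mount point from above:

  lfs df -i /mnt/data   # inode totals per MDT and per OST
  df -i /mnt/data       # the single aggregate the client reports

The client-side number is derived from the target statistics, so the two can legitimately disagree when the MDT and the OSTs have very different inode counts.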
2008 Mar 07
2
Multihomed question: want Lustre over IB and Ethernet
Chris,
Perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not.
Shane
----- Original Message -----
From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org>
To: lustre-discuss <lustre-discuss at lists.lustre.org>
Sent: Fri Mar 07 12:03:17 2008
Subject: Re: [Lustre-discuss] Multihomed
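For the multihomed part itself, a minimal LNET sketch, assuming server interfaces ib0 and eth0 (interface names are illustrative) and the 1.6-era module configuration:

  # /etc/modprobe.conf on the servers (exact path varies by distro)
  options lnet networks="o2ib0(ib0),tcp0(eth0)"

Clients on either fabric then reach the servers via the NID on their own network; changing server NIDs afterwards typically requires a writeconf on the targets.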
2007 Nov 23
2
How to remove OST permanently?
All,
I've added a new 2.2 TB OST to my cluster easily enough, but this new
disk array is meant to replace several smaller OSTs that I used to have,
which were only 120 GB, 500 GB, and 700 GB.
Adding an OST is easy, but how do I REMOVE the small OSTs that I no
longer want to be part of my cluster? Is there a command to tell Lustre
to move all the file stripes off one of the nodes?
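A sketch of the usual drain-then-remove flow (device number, OST index, UUID, and mount point are all placeholders):

  lctl dl | grep OST0002                     # on the MDS: find the device number
  lctl --device <devno> deactivate           # stop new object allocations on that OST
  lfs find --obd lustre-OST0002_UUID /mnt/lustre | lfs_migrate -y
                                             # on a client: restripe files off the OST

Once empty, the OST can be left permanently deactivated or removed from the config with a writeconf.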
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi,
We run a four-node Lustre 2.3 setup, and I needed to both change the hardware
under the MGS/MDS and reassign an OSS IP. At the same time, I added a brand-new
10GbE network to the system, which was the reason for the MDS hardware
change.
I ran tunefs.lustre --writeconf as per chapter 14.4 in Lustre Manual,
and everything mounts fine. Log regeneration apparently works, since
it seems to do something, but
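For reference, the chapter 14.4 sequence is roughly the following sketch (placeholder devices; all targets and clients must be unmounted first):

  tunefs.lustre --writeconf /dev/mdt    # on the MDS
  tunefs.lustre --writeconf /dev/ost0   # on each OSS, for every OST
  # then remount in order: MGS/MDT first, then OSTs, then clients

The configuration logs are regenerated as each target registers with the MGS at mount time.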
2013 Sep 19
0
Files written to an OST are corrupted
Hi, everyone,
I need some help in figuring out what may have happened here, as newly
created files on an OST are being corrupted. I don't know if this
applies to all files written to this OST, or just to files on the order
of 2 GB in size, but files are definitely being corrupted, with no errors
reported by the OSS machine.
Let me describe the situation. We had been running Lustre 1.8.4 for
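When chasing corruption like this, one first step is to confirm which OST backs a given bad file (run client-side; the path is illustrative):

  lfs getstripe /mnt/lustre/path/to/badfile   # prints the OST index and object id per stripe

That makes it possible to tell whether every corrupted file maps to the same OST.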
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OSTs, and 2 Lustre clients. My MDS shows:
[code]
[root at MDS ~]# lctl list_nids
10.94.214.185 at tcp
[root at MDS ~]#
[/code]
On Lustre Client1:
[code]
[root at lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G       6%    /mnt/lustre[MDT:0]
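A client-side check for the inactive entries; a sketch, with the device number as a placeholder:

  lctl dl | grep osc               # local OSC devices; state UP vs. IN (inactive)
  lctl --device <devno> activate   # re-enable an OSC that was deactivated locally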
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OSTs, and 2 Lustre clients. My MDS shows:
[code]
[root at MDS ~]# lctl list_nids
10.94.214.185 at tcp
[root at MDS ~]#
[/code]
On Lustre Client1:
[code]
[root at lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID
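Since the MDS NID is shown above, a quick connectivity test from the client side:

  lctl ping 10.94.214.185@tcp   # verifies LNET reachability to the MDS
  lctl dl                       # device states on the client; inactive OSCs show IN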
2010 Jul 08
5
No space left on device on not full filesystem
Hello,
We are running Lustre 1.8.1 and have hit a "No space left on device"
error when uploading 500 GB of small files (less than 100 KB each).
The problem seems to depend on the number of files. If we remove one
file, we can create one new file, even a GB-sized one; but if we haven't
removed anything, we can't create even a very small file, for example
using touch
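With millions of small files the MDT typically exhausts its inodes long before the OSTs run out of bytes, which produces exactly this ENOSPC pattern; a quick check (mount point is a placeholder):

  lfs df -i /mnt/lustre   # if the MDT line shows ~100% IUsed, creates fail even though df shows free space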
2008 Apr 15
4
NFS Performance
Hi,
With help from Oleg we got the right patches applied and NFS working
well. Maximum performance was about 60 MB/sec. Last week that dropped
to about 12.5 MB/sec and I cannot find a reason. Lustre clients all
obtain 100+ MB/sec on GigE. Each OST is good for 270 MB/sec. When
mounting the client on one of the OSSs I get 230 MB/sec. Seems the
speed is there. How can NFS and Lustre be tuned
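One thing to rule out first is the NFS wire size, since a regression there alone can cost this kind of factor; a sketch, with server name, export, and values purely illustrative:

  mount -t nfs -o rsize=32768,wsize=32768 nfsserver:/export /mnt/nfs
  nfsstat -c    # high retransmission counts here would point at the network instead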
2013 Mar 11
4
Understanding lustre setup ..
Hello,
I have been reading
http://wiki.lustre.org/images/1/1b/Hadoop_wp_v0.4.2.pdf for setting up
Hadoop over lustre.
Generally in a Hadoop setup, we have one NameNode and some number of DataNodes.
If I want to set up the same with Lustre as the backend, in the document
it is mentioned that:
".............Our experiments run on cluster with 8 nodes in total,
one is mds/namenode, the rest are
2010 Sep 04
0
Set quota on Lustre system file client, reboots MDS/MGS node
Hi
I used Lustre 1.8.3 on CentOS 5.4. I patched the kernel according to the
Lustre 1.8 Operations Manual.
I have a problem when I want to implement quotas.
My cluster configuration is:
1. one MGS/MDS host (with two devices: sda and sdb, respectively)
with the following commands:
1) mkfs.lustre --mgs /dev/sda
2) mount -t lustre /dev/sda /mnt/mgt
3) mkfs.lustre --fsname=lustre
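A sketch of the quota steps that typically follow such a setup on 1.8 (the user name and limits are made up):

  lfs quotacheck -ug /mnt/lustre    # scan once to build the quota files
  lfs setquota -u someuser -b 900000 -B 1000000 -i 90000 -I 100000 /mnt/lustre
  lfs quota -u someuser /mnt/lustre # verify the limits took effect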
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple
Ethernet config, with the MDT and OST on the same node. Can someone tell
me if the following (~150-second recovery occurring when a small 190 GB
OST is re-mounted) is expected behavior, or if I'm missing something?
I thought I would send this and continue with the eval while awaiting a
response.
I'm using
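The recovery window being asked about is visible directly in proc; an example using the target named in the subject:

  cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
  # shows status (RECOVERING/COMPLETE), time remaining, and connected vs. evicted clients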
2008 Feb 05
2
lctl deactivate questions
Hi;
One of our OSTs filled up. Once we realized this,
we executed
lctl --device 9 deactivate
on our fs's combo MDS/MGS machine.
We saw in the syslog that the OST in
question was deactivated:
Lustre: setting import ufhpc-OST0008_UUID INACTIVE by administrator request
However, 'lfs df' on the clients does not show
that the OST is deactivated there, unless we *also*
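Deactivating on the MDS does not propagate to clients; a sketch of the per-client step (the device number is a placeholder):

  lctl dl | grep OST0008             # on the client: find the OSC device number
  lctl --device <devno> deactivate   # after this, lfs df marks the OST inactive locally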
2007 Nov 07
9
How To change server recovery timeout
Hi,
Our lustre environment is:
2.6.9-55.0.9.EL_lustre.1.6.3smp
I would like to change the recovery timeout from its default value of 250s to
something longer.
I tried the example from the manual:
set_timeout <secs> Sets the timeout (obd_timeout) for a server
to wait before failing recovery.
We performed that experiment on our test lustre installation with one
OST.
storage02 is our OSS
[root at
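On 1.6 the live value can also be read and set through proc; a sketch (600 is an arbitrary larger value, and the change is not persistent):

  cat /proc/sys/lustre/timeout          # current obd_timeout
  echo 600 > /proc/sys/lustre/timeout   # takes effect immediately

Newer releases also accept a persistent variant, lctl conf_param <fsname>.sys.timeout=600, run on the MGS.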
2007 Nov 16
5
Lustre Debug level
Hi,
The Lustre 1.6 manual (v18) says that in production the Lustre debug level
should be set fairly low. The manual also says that I can verify the
level by running the following commands:
# sysctl portals.debug
This gives the following error:
error: 'portals.debug' is an unknown key
cat /proc/sys/lnet/debug
gives output:
ioctl neterror warning error emerg ha config console
cat
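The portals.* sysctl name is from older releases; on this version the proc file is the interface. An example of dropping to a reduced, production-style mask (the exact mask is a matter of choice):

  cat /proc/sys/lnet/debug                                    # current debug mask
  echo "warning error emerg console" > /proc/sys/lnet/debug   # reduced mask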
2013 Apr 29
1
OSTs inactive on one client (only)
Hi everyone,
I have seen this question here before, but without a very
satisfactory answer. One of our half a dozen clients has
lost access to a set of OSTs:
> lfs osts
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID ACTIVE
6: lustre-OST0006_UUID ACTIVE
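Since only one client is affected, a local check-and-repair sketch on that client (NIDs and device numbers are placeholders):

  lctl ping <oss-nid>              # is the OSS serving the inactive OSTs reachable over LNET?
  lctl dl | grep osc               # find the OSC devices stuck inactive
  lctl --device <devno> activate   # per device; or simply remount the filesystem on this client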
2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings!
I am working with Lustre 2.1.2 on RHEL 6.2. First I configured it
using the standard defaults over TCP/IP. Everything worked very
nicely using a real, static --mgsnode=a.b.c.x value, which was the
actual IP of the MGS/MDS system1 node.
I am now trying to integrate it with Pacemaker-1.1.7. I believe I
have most of the set-up completed with a particular exception. The
"lctl
2008 Mar 03
1
Quota setup fails because of OST ordering
Hi all,
after installing a Lustre test file system consisting of 34 OSTs, I
encountered a strange error when trying to set up quotas:
lfs quotacheck gave me an "Input/Output error", while in
/var/log/kern.log I found a Lustre error
LustreError: 20807:0:(quota_check.c:227:lov_quota_check()) lov idx 32
inactive
Indeed, in /proc/fs/lustre/lov/.../target_obd all 34 OSTs were listed
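A way to inspect the state and ordering the error refers to (the proc path pattern comes from the report above):

  cat /proc/fs/lustre/lov/*/target_obd
  # one line per OST: index, UUID, status; idx 32 should read ACTIVE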
2008 Jan 10
4
1.6.4.1 - active client evicted
Hi!
We've started to poke and prod at Lustre 1.6.4.1, and it seems to
mostly work (we haven't had it OOPS on us yet like the earlier
1.6 versions did).
However, we had this weird incident where an active client (it was
copying 4 GB files and running ls at the time) got evicted by the MDS
and all OSTs. After a while the logs indicate that it did recover the
connection