Displaying 20 results from an estimated 1000 matches similar to: "Fallback"
2007 Mar 17
2
Fallback
Vladislav Vorobiev wrote:
> I don't know what fallback-override is.
<fallback-override> will transfer all listeners from the fallback
mountpoint back to this mountpoint when the source reconnects.
Cheers
Thomas
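For context, a minimal sketch of how that looks in icecast.xml (the mount
names here are made up for illustration):

  <mount>
      <mount-name>/live.ogg</mount-name>
      <fallback-mount>/backup.ogg</fallback-mount>
      <!-- move listeners back to /live.ogg when its source returns -->
      <fallback-override>1</fallback-override>
  </mount>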
2008 Mar 03
1
Quota setup fails because of OST ordering
Hi all,
after installing a Lustre test file system consisting of 34 OSTs, I
encountered a strange error when trying to set up quotas:
lfs quotacheck gave me an "Input/Output error", while in
/var/log/kern.log I found a Lustre error
LustreError: 20807:0:(quota_check.c:227:lov_quota_check()) lov idx 32
inactive
Indeed, in /proc/fs/lustre/lov/.../target_obd all 34 OSTs were listed
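A quick way to confirm which LOV index is considered inactive (the lov
path follows the one in the snippet; exact directory names vary by setup):

  # each line is: idx  UUID  state
  cat /proc/fs/lustre/lov/*/target_obd
  # cross-check against the device list
  lctl dl | grep OST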
2013 Apr 29
1
OSTs inactive on one client (only)
Hi everyone,
I have seen this question here before, but without a very
satisfactory answer. One of our half a dozen clients has
lost access to a set of OSTs:
> lfs osts
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID ACTIVE
6: lustre-OST0006_UUID ACTIVE
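If the OSTs themselves are healthy and only this client's view is stale,
re-activating the per-client imports often helps; a hedged sketch (device
numbers come from lctl dl on the affected client):

  lctl dl | grep -E 'OST000[234]'   # find the osc device numbers
  lctl --device <devno> activate    # repeat for each inactive OSC
  lfs osts                          # verify they flip back to ACTIVE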
2010 Jul 13
4
Enable async journals
Hi all,
we use SLES 11 and Lustre 1.8.1.1 + patches and would like to convert a
Lustre FS using external journals to one with async journals enabled.
Question is whether the procedure:
umount <filesystem> on all clients
umount <osts> on all OSSes
e2fsck <ost-device> on all OSSes for all OSTs
tune2fs -O ^has_journal <ost-device> on all
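For what it's worth, a sketch of that sequence as shell (mount points and
the 400 MB journal size are assumptions, and the sync_journal step
reflects my understanding of the 1.8 async-journal knob):

  umount /mnt/lustre                         # on every client
  umount /mnt/ost*                           # on every OSS
  e2fsck -f /dev/<ost-device>                # on every OSS, for each OST
  tune2fs -O ^has_journal /dev/<ost-device>  # drop the external journal
  tune2fs -j -J size=400 /dev/<ost-device>   # recreate an internal journal
  # after remounting the OSTs, async commit is a runtime setting:
  lctl set_param obdfilter.*.sync_journal=0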
2007 Nov 29
2
Balancing I/O Load
We are seeing some disturbing (probably due to our ignorance)
behavior from lustre 1.6.3 right now. We have 8 OSSs with 3 OSTs
per OSS (24 physical LUNs). We just created a brand new lustre file
system across this configuration using the default mkfs.lustre
formatting options. We have this file system mounted across 400
clients.
At the moment, we have 63 IOzone threads running
2007 Oct 01
1
fsck ldiskfs-backed OSTs?
There are references to running fsck on the lustre OSTs after a crash
or power failure. However, after downloading the ClusterFS
e2fsprogs and building them, e2fsck does not recognize our ldiskfs-based
OSTs. Is there a way to fsck the ldiskfs-based OSTs?
Thanks,
Charlie Taylor
UF HPC Center
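Once the patched e2fsprogs build is the one actually found first on PATH,
the invocation itself is plain e2fsck; a sketch (device path is an example):

  which e2fsck          # make sure the ClusterFS build is picked up
  e2fsck -fn /dev/sdb1  # read-only pass first
  e2fsck -fp /dev/sdb1  # then a preen/fix pass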
2007 Nov 23
2
How to remove OST permanently?
All,
I've added a new 2.2 TB OST to my cluster easily enough, but this new
disk array is meant to replace several smaller OSTs that I used to have,
which were only 120 GB, 500 GB, and 700 GB.
Adding an OST is easy, but how do I REMOVE the small OSTs that I no
longer want to be part of my cluster? Is there a command to tell Lustre
to move all the file stripes off one of the nodes?
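A hedged outline of the usual retirement procedure (filesystem name and
OST index are examples; the conf_param syntax is the 1.x one):

  # permanently stop new allocations to the OST being retired
  lctl conf_param lustre-OST0002.osc.active=0
  # list every file with a stripe object on it
  lfs find --obd lustre-OST0002_UUID /mnt/lustre
  # copy each listed file and rename it over the original, so its
  # objects are re-allocated onto the remaining OSTs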
2013 May 27
1
Query on improving throughput
Dear All,
We have a small setup of Lustre with 7 OSTs on 8 Gb FC. We have kept one
OST per FC port. We have Lustre 2.3 with CentOS 6.3. There are 32 clients
which access this over FDR IB. We can achieve more than 1.3 GB/s
throughput using IOR, without cache, which is roughly 185 MB/s per OST. We
wanted to know if this is normal. Should we expect more from an 8 Gb FC
port? OSTs are on 8+2 RAID6.
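For reference, an IOR invocation in the same spirit (flags and sizes are
illustrative: -F is file-per-process, -t/-b are transfer/block size):

  mpirun -np 32 IOR -w -r -F -t 1m -b 8g -o /mnt/lustre/ior_testfile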
2008 Apr 15
4
NFS Performance
Hi,
With help from Oleg we got the right patches applied and NFS working
well. Maximum performance was about 60 MB/sec. Last week that dropped
to about 12.5 MB/sec and I cannot find a reason. Lustre clients all
obtain 100+ MB/sec on GigE. Each OST is good for 270 MB/sec. When
mounting the client on one of the OSSs I get 230 MB/sec. Seems the
speed is there. How can NFS and Lustre be tuned
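One thing worth ruling out is the transfer size the NFS client negotiated;
a hedged example of pinning it explicitly (values are illustrative):

  mount -t nfs -o rsize=32768,wsize=32768,tcp nfsserver:/lustre /mnt/nfs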
2010 Jul 05
4
Adding OST to online Lustre with quota
Hello,
we wonder whether it is possible to add OSTs to a Lustre filesystem with
quota support without taking it offline?
We tried to do this, but all quota information was lost. Despite the fact
that the OST was formatted with quota support,
we are receiving this error message:
Lustre: 3743:0:(lproc_quota.c:447:lprocfs_quota_wr_type())
lustrefs-OST0016: quotaon failed because quota files
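As far as I know, the quota files have to be regenerated after the OST set
changes; a sketch with 1.8-era commands (mount point is an example):

  lfs quotacheck -ug /mnt/lustre   # rebuild quota files, fs stays mounted
  lfs quotaon -ug /mnt/lustre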
2010 Jul 21
1
Getting a list of files on down OST
Hi Guys,
I'm trying to figure out a way to get a list of files with objects
present on an OST that is down. Normally one could do:
lfs find -O <OST> dir
but that is giving us Input/output errors (I assume because the OST is
down). Is there a good way to get a list of objects (Maybe from the
MDS?), what OSTs they are on, and correlate them with files?
Thanks,
Mark
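One workaround is to mark the dead OST inactive on the client first, so
the scan does not hang on it; a hedged sketch (device number from lctl dl,
UUID and index as examples):

  lctl dl | grep OST0002                # find the osc device number
  lctl --device <devno> deactivate      # client-side only
  lfs find --obd lustre-OST0002_UUID /mnt/lustre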
2007 Dec 13
4
Lustre drawback
Hello everybody,
at the following pages:
http://www.rit.edu/~rc/docs/Survey_of_Clustered_Parallel_File_Systems_004_LANL.ppt
http://www.intel.com/cd/ids/developer/asmo-na/eng/dc/tools/threading/238284.htm?page=2
I read:
"[...] Currently, one additional drawback to Lustre is that a Lustre
client cannot be on a server that is providing OSTs. This solution is
being worked on and may be
2012 Sep 27
4
Bad reporting inodes free
Hello,
When I run "df -i" on my clients I get 95% inodes used, i.e. 5% inodes free:
Filesystem                           Inodes    IUsed   IFree IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248  95% /mnt/data
But if I run "lfs df -i" I get:
UUID                                 Inodes    IUsed   IFree I
2010 Jul 08
5
No space left on device on not full filesystem
Hello,
We are running Lustre 1.8.1 and have hit a "No space left on device"
error when uploading 500 GB of small files (less than 100 KB each).
The problem seems to depend on the number of files. If we remove one
file, we can create one new file, even of GB size; but if we haven't
removed anything we can't create even a very small file, for example
using touch
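The classic first check for this symptom is whether inodes rather than
bytes have run out, per OST and on the MDT; a sketch:

  lfs df -h /mnt/lustre   # bytes per OST/MDT
  lfs df -i /mnt/lustre   # inodes; a full MDT or OST can yield ENOSPC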
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem, and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this:
For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
If you are unable to connect both
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple
Ethernet config, with MDT and OST on the same node. Can someone tell
me if the following (~150 second recovery occurring when a small 190 GB
OST is re-mounted) is expected behavior or if I'm missing something?
I thought I would send this and continue with the eval while awaiting
a response.
I'm using
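The progress of that ~150 s window can be watched in the proc file the
subject names:

  cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
  # reports status (RECOVERING/COMPLETE), elapsed time, and how many
  # clients have reconnected or been evicted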
2007 Nov 26
15
bad 1.6.3 striped write performance
Hi,
I'm seeing what can only be described as dismal striped write
performance from lustre 1.6.3 clients :-/
1.6.2 and 1.6.1 clients are fine. 1.6.4rc3 clients (from cvs a couple
of days ago) are also terrible.
the below shows that the OS (centos4.5/5) or fabric (gigE/IB) or lustre
version on the servers doesn't matter - the problem is with the 1.6.3
and 1.6.4rc3 client kernels
2006 May 19
2
Limitation of storage size.
I want to configure one 50 TB OST storage target; is that okay?
As far as I know, the size limit of ext3 is 1XT. Thanks
2012 Oct 19
6
Large Corosync/Pacemaker clusters
Hi,
We're setting up fairly large Lustre 2.1.2 filesystems, each with 18
nodes and 159 resources all in one Corosync/Pacemaker cluster, as
suggested by our vendor. We're getting mixed messages from our vendor
and others on how large a Corosync/Pacemaker cluster will work well.
1. Are there Lustre Corosync/Pacemaker clusters out there of this
size or larger?
2.
2010 Aug 12
3
How to track down a latency/timing problem
Hello Lustre Experts
I am trying to solve a problem with very slow "ls" and other operations
on large numbers of files, but good overall read/write rates.
We are running a small cluster of 3 OSSs with 9 OSTs, 1 MDS (with SSD
MDT) and currently two clients. All server nodes are centos 5.2 with
lustre 1.8.1 while the clients are centos 5.4 with lustre 1.8.3. All
components are networked with DDR
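A first diagnostic I would try is watching the client-side RPC counters
while one of the slow "ls" runs executes; a hedged sketch (1.8 syntax):

  lctl get_param mdc.*.stats    # metadata RPCs to the MDS
  lctl get_param llite.*.stats  # VFS-level ops seen by the Lustre client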