Displaying 20 results from an estimated 6000 matches similar to: "maximum MDT inode count"
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't
find a bz for this, but it would appear that tunefs.lustre --print fails
on a Lustre MDT or OST device if it is mounted with MMP.
Is this expected behavior?
TIA
mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1
checking for existing Lustre data: not found
tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
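For anyone hitting the same message: MMP (multiple mount protection) locks the ldiskfs device while it is mounted, which also blocks tunefs.lustre. A quick read-only check, as a sketch (the device path is from the post; the mount point is an assumption):
# confirm the mmp feature is enabled on the target
dumpe2fs -h /dev/mapper/mdt1 | grep -i mmp
# tunefs.lustre --print should work once the target is unmounted
umount /mnt/mdt1
tunefs.lustre --print /dev/mapper/mdt1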
2008 Jan 28
1
Questions on MDT inode size
Hi,
The documentation warns against using inodes smaller than 512 bytes on
the MDT. If I plan to use a stripe count of one (I have many small
files), is it possible to use an inode size of 256 bytes and still use
in-inode EAs for metadata?
Thanks
/Jakob
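For context, the MDT inode size is chosen at format time via the backing-filesystem options; a minimal sketch, with fsname, MGS NID, and device as placeholders:
# format an MDT with the manual's recommended 512-byte inodes;
# with "-I 256" a single-stripe EA may still fit in the inode,
# but there is little room left for any other EAs
mkfs.lustre --mdt --fsname=testfs --mgsnode=mgs@tcp \
  --mkfsoptions="-I 512" /dev/sdb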
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton:
http://www.linux-mag.com/id/7839
Does anyone have views on whether this sort of caching would be useful
for the MDT? My feeling is that MDT reads are probably fairly random,
but writes might benefit...?
GREG
--
Greg Matthews 01235 778658
Senior Computer Systems Administrator
Diamond Light Source, Oxfordshire, UK
2010 Jul 20
1
mdt backup tar --xattrs question
Greetings Group!
I hope this will be an easy one. To conserve steps in backing up the
metadata extended attributes of a Lustre MDT, I am looking at using a
newer version of tar combined with its --xattrs option. (Note:
previously I have used the two-step MDT backup from the Lustre Manual,
and it has been successful.) If I can back up the extended attributes
via tar so that I don't
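A sketch of the one-step backup being described, assuming a GNU tar new enough to support --xattrs (device and paths are illustrative):
# mount the MDT backing filesystem read-only and archive it with its EAs
mount -t ldiskfs -o ro /dev/mdtdev /mnt/mdt_snap
cd /mnt/mdt_snap
tar czf /backup/mdt_backup.tgz --xattrs --sparse .
The two-step procedure in the Lustre Manual saves the EAs separately with getfattr; --xattrs folds them into the archive itself.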
2012 Sep 27
4
Bad reporting inodes free
Hello,
When I run "df -i" on my clients I get 95% inodes used, i.e. 5% inodes free:
Filesystem                           Inodes    IUsed   IFree IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248  95% /mnt/data
But if I run "lfs df -i" I get:
UUID                 Inodes    IUsed    IFree    I
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all,
Here is the situation:
I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to
use as a failover MGS and active/active MDT with ZFS.
I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the
shelf has 2 SAS ports, connected to a SAS HBA on each node), and I
am using Lustre 2.4 on CentOS 6.4 x64.
I have created 3 zfs pools:
1. mgs:
# zpool
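The general shape of such a setup, as a sketch (pool layout, disk names, and fsname are assumptions, since the post is truncated):
# one pool for the MGS, one per MDT; mirror across the shared JBOD disks
zpool create mgs mirror disk1 disk2
mkfs.lustre --mgs --backfstype=zfs mgs/mgs
zpool create mdt0 mirror disk3 disk4
mkfs.lustre --mdt --backfstype=zfs --fsname=testfs --index=0 \
  --mgsnode=10.0.0.22@tcp --mgsnode=10.0.0.23@tcp \
  --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mdt0/mdt0
For failover, each pool must only ever be imported on one node at a time.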
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple
Ethernet config, with the MDT and OST on the same node. Can someone tell
me if the following (~150-second recovery occurring when a small 190 GB
OST is re-mounted) is expected behavior, or if I'm missing something?
I thought I would send this and continue with the eval while awaiting a
response.
I'm using
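Recovery progress can be watched through the proc file named in the subject; a sketch:
# on the OSS, while recovery is running
cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
The window lasts until all previously connected clients reconnect or a timeout expires, so ~150 seconds on a freshly remounted OST is not unusual.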
2007 Nov 19
6
Dedicated MGS?
This may be in the documentation; if so, I missed it.
If a site has multiple Lustre file systems, the documentation implies
that there only needs to be a single MGS for an entire site
(regardless of the number of file systems). However, I also know
it is fairly common to have a combined MGS/MDT. So here are the
questions.
1. If we are going to have several Lustre file systems,
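For reference, a dedicated MGS is simply a target formatted with no fsname, as a sketch (device names and fsname are illustrative):
# format and start a standalone MGS
mkfs.lustre --mgs /dev/sda
mount -t lustre /dev/sda /mnt/mgs
# each file system's MDT then registers against it
mkfs.lustre --mdt --fsname=fs1 --index=0 --mgsnode=mgs@tcp /dev/sdb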
2010 Jul 08
5
No space left on device on not full filesystem
Hello,
We are running Lustre 1.8.1 and have hit a "No space left on device"
error when uploading 500 GB of small files (less than 100 KB each).
The problem seems to depend on the number of files. If we remove one
file, we can create one new file, even GB-sized; but if we haven't
removed anything, we can't create even a very small file, for example
using touch
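This symptom (bytes free, but creation fails until a file is removed) usually points at inode exhaustion on a target rather than byte capacity; a quick check from any client, as a sketch:
lfs df -i    # per-target inode usage
lfs df -h    # per-target byte usage, for comparison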
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem, and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this:
For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
If you are unable to connect both
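One thing worth verifying with this kind of device-level copy (a hedged note, since the post is truncated): plain -a does not copy extended attributes, and Lustre keeps essential metadata in EAs on every object, so a target copied without them will not behave correctly.
# same copy, preserving xattrs too (requires an rsync with -X support)
rsync -aSvX /mnt/ost_old/ /mnt/ost_new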
2008 Dec 24
6
Bug when using /dev/cciss/c0d2 as mdt/ost
I am trying to build lustre-1.6.6 against the pre-patched kernel
downloaded from Sun. But as written in the Operations Manual, it creates
RPMs for 2.6.18-92.1.10.el5_lustrecustom. Is there a way to ask it not
to append "custom" as the extraversion?
The running kernel is 2.6.18-92.1.10.el5_lustre.1.6.6smp.
--
Regards--
Rishi Pathak
National PARAM Supercomputing Facility
Center for Development of Advanced
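If the stray "custom" comes from the kernel side of the build, one generic workaround (an assumption about the cause, not a confirmed fix for this toolchain) is to pin the extraversion in the patched kernel source before building, so the module RPMs match the running kernel's uname -r:
# top-level Makefile of the kernel source tree
EXTRAVERSION = -92.1.10.el5_lustre.1.6.6smp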
2008 Feb 22
6
2.6.23 client systems with any compatible server
I want to have a Lustre client running on a system with a 2.6.23.12
kernel. (The reason is that there is a special patch required for these
60+ quad-core AMD Opteron systems that we have, and the patch is
currently only available for this 2.6.23.12 kernel.)
Does anyone have a recommendation for how I should get a client and
then a compatible server?
For the server, we only need minimal
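For the client side, a patchless client can generally be built against a custom kernel; a sketch (source path and the --disable-server configure flag are the usual ones, but treat the details as assumptions for this kernel version):
# build only the client against the already-patched 2.6.23.12 tree
./configure --with-linux=/usr/src/linux-2.6.23.12 --disable-server
make && make install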
2012 Oct 09
1
MDS read-only
Dear all,
Two of our MDSes have repeatedly gone read-only recently, after one e2fsck, on Lustre 1.8.5. After the MDT has been mounted for a while, the kernel reports errors like:
Oct 8 20:16:44 mainmds kernel: LDISKFS-fs error (device cciss!c0d1): ldiskfs_ext_check_inode: bad header/extent in inode #50736178: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
Oct 8 20:16:44 mainmds
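When ldiskfs aborts with bad extent headers like this, the usual next step is a full offline check with the Lustre-patched e2fsprogs; a sketch (device as in the log line, run with the MDT unmounted):
e2fsck -f /dev/cciss/c0d1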
2013 Apr 16
2
UID/GID access control in Lustre
Hello list members,
I started to develop a kernel module which hooks into Lustre 2.3 to
control data access based on NID and UID/GID. The background is the
following: here at GSI we currently have a reserved UID/GID space which
partner institutes use to access our exported Lustre mounts.
However, we currently have no mechanism to control (guarantee) that the
reserved UID/GID space is
2010 Aug 12
3
How to track down a latency/timing problem
Hello Lustre Experts
I am trying to solve a problem of very slow "ls" and other operations
on large numbers of files, despite good overall read/write rates.
We are running a small cluster of 3 OSSs with 9 OSTs, 1 MDS (with an SSD
MDT), and currently two clients. All server nodes are CentOS 5.2 with
Lustre 1.8.1, while the clients are CentOS 5.4 with Lustre 1.8.3. All
components are networked with DDR
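A couple of client-side starting points for separating metadata latency from the good bulk rates, as a sketch (1.8-era parameter names and paths; adjust to the installed version):
lctl get_param llite.*.statahead_max   # statahead strongly affects ls on big dirs
strace -c ls -l /mnt/lustre/bigdir     # shows whether time goes to stat or getdents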
2010 Sep 09
1
What's the correct sequence to umount multiple lustre file system
Any recommendation about the sequence for unmounting multiple Lustre file systems with a combined MGS/MDT or a separate MGS and MDT? Thanks.
Ming
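The order generally recommended is the reverse of startup, as a sketch (mount points are illustrative):
umount /mnt/lustre   # clients first
umount /mnt/mdt      # then the MDT of each file system
umount /mnt/ost*     # then the OSTs
umount /mnt/mgs      # a separate MGS last, once no file system needs it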
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used  Available Use% Mounted on
lustre-MDT0000_UUID   4.5G 274.3M       3.9G   6% /mnt/lustre[MDT:0]
lustre-OST0000_UUID
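When a client shows an OST as inactive, it can be reactivated from that client once connectivity is fixed; a sketch (the device number comes from the lctl dl listing):
[code]
lctl dl | grep OST0006          # note the leading device number
lctl --device <devno> activate
[/code]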
2007 Mar 20
15
How to bypass failed OST without blocking?
Hi
I want my Lustre setup to do the following when an OST fails: if a file
has stripe data on the failed OST, any operation on that file should
return an IO error without blocking; and meanwhile I should still be
able to create and read/write new files, or read/write files which have
no stripe data on the failed OST, without blocking.
What should I do? How should I configure this?
thanks!
swin
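The usual lever for this is deactivating the failed OST's OSC, so IO fails fast instead of blocking; a sketch (fsname and target names are illustrative, and syntax varies by version):
# on each client: operations touching the failed OST return an error
lctl set_param osc.testfs-OST0000-osc-*.active=0
# on the MDS: find the OSC device for that OST and deactivate it,
# so no new files are allocated there
lctl dl | grep OST0000
lctl --device <devno> deactivate
New files then land on the remaining OSTs, while files striped over the failed OST return IO errors.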
2007 Nov 12
8
More failover issues
In 1.6.0, when creating an MDT, you could specify multiple --mgsnode
options and it would fail over between them. 1.6.3 only seems to take
the last one, and --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib
doesn't seem to fail over to the other node. Any ideas how to get
around this?
Robert
Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
leblanc at
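For what it's worth, the form the manual documents for a failover MGS is one --mgsnode option per NID (the 1.6.3 behavior reported above notwithstanding); the colon-separated list belongs to the client mount syntax. A sketch with the NIDs from the post (target type, fsname, index, and device are assumptions):
mkfs.lustre --ost --fsname=testfs --index=0 \
  --mgsnode=192.168.1.252@o2ib --mgsnode=192.168.1.253@o2ib /dev/sdc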
2010 Sep 10
11
Large directory performance
We have been struggling with our Lustre performance for some time now, especially with large directories. I recently did some informal benchmarking (on a live system, so I know the results are not scientifically valid) and noticed a huge drop in read (stat operation) performance past 20k files in a single directory. I'm using bonnie++, disabling IO testing (-s 0) and just creating, reading,
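A sketch of the kind of metadata-only run described (directory and user are placeholders):
# -s 0 skips the throughput tests; -n 20 creates 20*1024 small files
bonnie++ -d /mnt/lustre/benchdir -s 0 -n 20 -u nobody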