Displaying 20 results from an estimated 6000 matches similar to: "Questions on MDT inode size"
2008 Jun 17
4
maximum MDT inode count
For future filesystem compatibility, we are wondering if there are any
Lustre MDT filesystems in existence that have 2B or more total inodes?
This is fairly unlikely, because it would require an MDT filesystem
that is > 8TB in size (which isn't even supported yet) and/or has been
formatted with specific options to increase the total number of inodes.
This can be checked with
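(The quoted message is cut off after "This can be checked with". As an illustrative assumption, not the original poster's command, the formatted and used inode counts of an ldiskfs MDT can be read from a client or from the MDT device itself; /mnt/lustre and /dev/mdtdev are placeholder names.)
[code]
# from any client: per-target inode totals as seen by Lustre
lfs df -i /mnt/lustre

# on the MDS, against the MDT block device: "Inode count" in the
# superblock is the formatted maximum
dumpe2fs -h /dev/mdtdev 2>/dev/null | grep -i 'inode count'
[/code]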
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton:
http://www.linux-mag.com/id/7839
Does anyone have views on whether this sort of caching would be useful for
the MDT? My feeling is that MDT reads are probably pretty random but
writes might benefit...?
GREG
--
Greg Matthews 01235 778658
Senior Computer Systems Administrator
Diamond Light Source, Oxfordshire, UK
2007 Nov 19
6
Dedicated MGS?
This may be in the documentation. If so, I missed it.
If a site has multiple Lustre file systems, the documentation implies
that there only needs to be a single MGS for an entire site
(regardless of the number of file systems). However, I also know
it is fairly common to have a combined MGS/MDT. So here are the
questions.
1. If we are going to have several Lustre file systems,
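(The question list is cut off above. For reference, a single dedicated MGS can serve several filesystems; a rough sketch of how that is formatted, with device paths and the MGS NID as assumptions rather than anything from the thread:)
[code]
# one dedicated MGS for the whole site
mkfs.lustre --mgs /dev/sdb
mount -t lustre /dev/sdb /mnt/mgs

# each filesystem's MDT then points at that MGS
mkfs.lustre --fsname=fs1 --mdt --mgsnode=192.168.1.10@tcp0 /dev/sdc
mkfs.lustre --fsname=fs2 --mdt --mgsnode=192.168.1.10@tcp0 /dev/sdd
[/code]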
2012 Sep 27
4
Bad reporting inodes free
Hello,
When I run a "df -i" in my clients I get 95% indes used or 5% inodes free:
Filesystem Inodes
IUsed IFree IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248 95%
/mnt/data
But if I run lfs df -i i get:
UUID Inodes IUsed
IFree I
2010 Jul 20
1
mdt backup tar --xattrs question
Greetings Group!
I hope this will be an easy one. To conserve steps in backing up the
metadata extended attributes of a Lustre mdt, I am looking at using a
newer version of tar combined with its --xattrs option. (Note:
Previously I have used the mdt two-step back-up from the Lustre Manual
and it has been successful.) If I can backup the extended attributes
via tar so that I don't
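(Not from the thread, but a hedged sketch of the approach being asked about: mount the MDT read-only as ldiskfs and let tar carry the trusted.* attributes that hold Lustre metadata. Paths are placeholders, and GNU tar >= 1.27 is assumed for --xattrs/--xattrs-include.)
[code]
mount -t ldiskfs -o ro /dev/mdtdev /mnt/mdt_snap
cd /mnt/mdt_snap
# --xattrs stores extended attributes; --sparse keeps the archive small
tar czf /backup/mdt-backup.tgz --xattrs --xattrs-include='trusted.*' --sparse .
cd /
umount /mnt/mdt_snap
[/code]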
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't
find a bz for this, but it would appear that tunefs.lustre --print fails
on a lustre mdt or ost device if mounted with mmp.
Is this expected behavior?
TIA
mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1
checking for existing Lustre data: not found
tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
2012 Oct 09
1
MDS read-only
Dear all,
Two of our MDSes have repeatedly gone read-only recently after an e2fsck run on Lustre 1.8.5. After the MDT has been mounted for a while, the kernel reports errors like:
Oct 8 20:16:44 mainmds kernel: LDISKFS-fs error (device cciss!c0d1): ldiskfs_ext_check_inode: bad header/extent in inode #50736178: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
Oct 8 20:16:44 mainmds
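(If the reported extent corruption were to be rechecked, a full forced fsck of the unmounted MDT is the usual first step; a minimal sketch, assuming the Lustre-patched e2fsprogs and a placeholder mount point:)
[code]
umount /mnt/mdt                 # make sure the MDT is not mounted
e2fsck -fy /dev/cciss/c0d1      # forced check, answer yes to repairs
[/code]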
2008 Dec 24
6
Bug when using /dev/cciss/c0d2 as mdt/ost
I am trying to build lustre-1.6.6 against the pre-patched kernel downloaded
from Sun.
But as written in the Operations Manual, it creates RPMs for
2.6.18-92.1.10.el5_lustrecustom. Is there a way to ask it not to append
"custom" as the extraversion?
Running kernel is 2.6.18-92.1.10.el5_lustre.1.6.6smp.
--
Regards--
Rishi Pathak
National PARAM Supercomputing Facility
Center for Development of Advanced
2010 Jul 08
5
No space left on device on not full filesystem
Hello,
We are running Lustre 1.8.1 and have hit a "No space left on device"
error when uploading 500 GB of small files (less than 100 KB each).
The problem seems to depend on the number of files. If we remove one
file, we can create one new file, even one of GB size; but if we haven't
removed anything we can't create even a very small file, for example
using touch
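("No space left on device" with many small files usually points at inode exhaustion rather than block exhaustion; a quick hedged check, with the mount point as a placeholder:)
[code]
# block usage per target
lfs df -h /mnt/lustre
# inode usage per target: a full MDT or OST inode table returns
# ENOSPC even when plenty of bytes remain free
lfs df -i /mnt/lustre
[/code]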
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6% /mnt/lustre[MDT:0]
lustre-OST0000_UUID
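(To see why a target shows as inactive, the device list and its state can be inspected from the client or the MDS; a hedged sketch, where device number 7 is purely illustrative and would come from the "lctl dl" output:)
[code]
# list configured devices and their state (UP / inactive)
lctl dl
# if an OST was deactivated on purpose, it can be re-activated
# by its device number from "lctl dl"
lctl --device 7 activate
[/code]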
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all,
Here is the situation:
I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to
use as a failover MGS and active/active MDT with ZFS.
I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the
shelf has 2 SAS ports, connected to a SAS HBA on each node), and I
am using Lustre 2.4 on CentOS 6.4 x64.
I have created 3 zfs pools:
1. mgs:
# zpool
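(The message is cut off at the zpool commands. A hedged sketch of the general pattern for ZFS-backed MGS/MDT targets follows; pool names, disk names and the dataset layout are assumptions, while the NIDs are the two node addresses mentioned above.)
[code]
# separate pools so the MGS and MDT can fail over independently
zpool create mgspool mirror sdb sdc
zpool create mdtpool mirror sdd sde

mkfs.lustre --mgs --backfstype=zfs mgspool/mgs
mkfs.lustre --fsname=lustre --mdt --index=0 --backfstype=zfs \
    --mgsnode=10.0.0.22@tcp --mgsnode=10.0.0.23@tcp \
    --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mdtpool/mdt0
[/code]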
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem and I wanted to separate the MGS/MDS from the OSSes of the previous setup), and then did this:
For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
If you are unable to connect both
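(One thing worth noting about the rsync shown above: Lustre keeps its metadata, such as striping information, in extended attributes, so a device-level copy needs to carry them. A hedged variant, assuming an rsync built with xattr support and run as root:)
[code]
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
# -X preserves extended attributes, -S keeps files sparse
rsync -aSvX /mnt/ost_old/ /mnt/ost_new
[/code]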
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G         6%  /mnt/lustre[MDT:0]
2008 Feb 19
32
storing SOM epoch in EA
Good day,
Some time ago we discussed that it would be very helpful to
store the epoch in the inode on the MDS. The perfect solution would be
to store the epoch in the old inode body, but there is not much space
for this in the body, and with the DMU we'll have this problem
again.
Given that the minimal inode size we use on the MDS is 512 bytes, we
can store up to 13 stripes in the body; larger EAs go to a
dedicated block.
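(Rough arithmetic behind the "13 stripes" figure, as an assumption-laden sketch; the struct sizes are approximations for the LOV layouts of that era, not figures from the post:)
[code]
# with 512-byte inodes, roughly 350 bytes of in-inode EA space remain
# after the inode core, extra fields and xattr headers
# LOV EA: ~32-byte lov_mds_md header + 24 bytes per stripe entry
#   32 + 13 * 24 = 344  -> still fits in the inode body
#   32 + 14 * 24 = 368  -> spills into a dedicated EA block
[/code]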
2007 Nov 23
2
How to remove OST permanently?
All,
I've added a new 2.2 TB OST to my cluster easily enough, but this new
disk array is meant to replace several smaller OSTs that I used to have,
which were only 120 GB, 500 GB, and 700 GB.
Adding an OST is easy, but how do I REMOVE the small OSTs that I no
longer want to be part of my cluster? Is there a command to tell Lustre
to move all the file stripes off one of the nodes?
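(For reference, the usual sequence is to deactivate the OST on the MDS so no new objects land on it, then copy existing file data off it before removing it. A hedged sketch; the OST index, device number and mount point are placeholders, and lfs_migrate only appeared in later releases:)
[code]
# on the MDS: find the osc device for the OST and deactivate it,
# so no new objects are allocated there
lctl dl | grep OST0002
lctl --device <devno> deactivate

# on a client: find files with objects on that OST and re-create
# them on the remaining OSTs
lfs find --obd lustre-OST0002_UUID /mnt/lustre | lfs_migrate -y
[/code]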
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
Chris,
Perhaps you need to perform a writeconf-like command. I'm not sure if this is needed in 1.6 or not.
Shane
----- Original Message -----
From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org>
To: lustre-discuss <lustre-discuss at lists.lustre.org>
Sent: Fri Mar 07 12:03:17 2008
Subject: Re: [Lustre-discuss] Multihomed
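(The "writeconf-like command" hinted at above is presumably tunefs.lustre --writeconf, which regenerates the configuration logs so new NIDs are picked up. A hedged outline of the usual order, with placeholder device paths:)
[code]
# with clients and all targets unmounted:
tunefs.lustre --writeconf /dev/mdtdev     # MGS/MDT first
tunefs.lustre --writeconf /dev/ostdev     # then every OST
# remount in order: MGS/MDT, then OSTs, then clients
[/code]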
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple
Ethernet config, with the MDT and OST on the same node. Can someone tell
me if the following (~150-second recovery occurring when a small 190 GB
OST is re-mounted) is expected behavior, or if I'm missing something?
I thought I would send this and continue with the eval while awaiting a
response.
I'm using
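(The recovery window described above can be watched directly through the proc file named in the subject line; a hedged example, with the target name taken from that subject:)
[code]
cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
# shows status (RECOVERING / COMPLETE), connected clients and time left
[/code]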
2007 Oct 15
3
iptables rules for lustre 1.6.x and MGS recovery procedures
Hi,
I would like to know which TCP/UDP ports I should keep open in my
firewall policies on my MGS server so that the MGS server can be
fire-walled. Also, in the event of loss of the MGT, would it be possible
to recreate the MGT without losing data or bringing the filesystem
down (i.e. by using cached information from the MDTs and OSTs)?
Thanks
Anand
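(For the firewall part of the question: LNET over TCP uses a single acceptor port, 988 by default, so a minimal hedged rule looks like the following; the source subnet is an assumption:)
[code]
# allow Lustre/LNET traffic (TCP port 988 by default) from the cluster subnet
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 988 -j ACCEPT
[/code]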
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi,
We run four-node Lustre 2.3, and I needed to both change hardware
under the MGS/MDS and reassign an OSS IP. At the same time, I added a brand
new 10GE network to the system, which was the reason for the MDS hardware
change.
I ran tunefs.lustre --writeconf as per chapter 14.4 in the Lustre Manual,
and everything mounts fine. Log regeneration apparently works, since
it seems to do something, but