Displaying 20 results from an estimated 20000 matches similar to: "flash cache page"
2007 Nov 16
5
Lustre Debug level
Hi,
The Lustre manual 1.6 v18 says that in production the Lustre debug level
should be set fairly low. The manual also says that I can verify that
level by running the following commands:
# sysctl portals.debug
This gives the following error:
error: 'portals.debug' is an unknown key
cat /proc/sys/lnet/debug
gives the following output:
ioctl neterror warning error emerg ha config console
cat
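In 1.6 the old portals.debug sysctl key was renamed, so the /proc/sys/lnet path (or the lnet.debug sysctl name) is the one to check. A minimal sketch of lowering the debug mask; the value list is only an example of a low production setting:
[code]
# Check the current debug mask (1.6 renamed portals.debug to lnet.debug)
sysctl lnet.debug
cat /proc/sys/lnet/debug

# Reduce logging to a low production setting (example mask)
sysctl -w lnet.debug="warning error emerg console"
# or, on releases that support it:
lctl set_param debug="warning error emerg console"
[/code]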
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST, and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6% /mnt/lustre[MDT:0]
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST, and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6% /mnt/lustre[MDT:0]
lustre-OST0000_UUID
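# A hedged sketch of one way to see why a target shows as an inactive
# device on the client; the OSC name pattern lustre-OST0006-osc-* assumes
# the default fsname "lustre".
lctl dl                                           # list devices; "IN" marks inactive ones
lctl get_param osc.lustre-OST0006-osc-*.active    # 0 = administratively deactivated
lctl set_param osc.lustre-OST0006-osc-*.active=1  # re-enable it if it was deactivated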
2010 Sep 04
0
Set quota on Lustre system file client, reboots MDS/MGS node
Hi
I used lustre-1.8.3 on CentOS 5.4. I patched the kernel according to the
Lustre 1.8 operations manual (PDF).
I have a problem when I try to implement quotas.
My cluster configuration is:
1. one MGS/MDS host (with two devices: sda and sdb, respectively), configured
with the following commands:
1) mkfs.lustre --mgs /dev/sda
2) mount -t lustre /dev/sda /mnt/mgt
3) mkfs.lustre --fsname=lustre
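A rough sketch of the 1.8-era quota steps that typically follow such a setup once the filesystem is mounted on a client; the mount point /mnt/lustre and the user name bob are placeholders:
[code]
lfs quotacheck -ug /mnt/lustre    # scan and build the quota files (1.8-style)
lfs setquota -u bob -b 300000 -B 400000 -i 10000 -I 11000 /mnt/lustre
lfs quota -u bob /mnt/lustre      # verify the soft/hard block and inode limits
[/code]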
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem, and I wanted to separate the MGS/MDS from the OSS this time), and then did this:
For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
If you are unable to connect both
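On ldiskfs targets Lustre keeps its metadata in extended attributes, which rsync -aSv alone does not carry over; a hedged variant of the same copy, assuming an rsync built with xattr and ACL support:
[code]
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
# -X preserves extended attributes, -A preserves ACLs
rsync -aSvXA /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
[/code]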
2008 Jan 31
1
WBC subcomponents.
Hello
On Wed, 2008-01-23 at 00:10 +0300, Nikita Danilov wrote:
> Hello,
>
> below is a tentative list of tasks into which WBC effort can be
> sub-divided. I also provided a less exact list for the EPOCH component,
> and an incomplete list for the STL component.
>
> WBC tasks are estimated in lines-of-code with the total of (9100 + 3000)
> LOC, where LOC is a non-comment,
2014 Nov 13
0
OST acting up
Whoops, sent from the wrong email address; sending from the right address now:
Hello,
I am using Lustre 2.4.2 and have an OST that doesn't seem to be written to.
When I check the MDS with 'lctl dl' I do not see that OST in the list.
However when I check the OSS that OST belongs to I can see it is mounted
and up;
0 UP osd-zfs l2-OST0003-osd l2-OST0003-osd_UUID 5
3 UP obdfilter l2-OST0003
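A few hedged checks that might narrow this down; the device name follows the l2-OST0003 naming above, while the NID and parameter paths are assumptions for a 2.4 MDS:
[code]
# On the MDS: the OST should show up as an osp/osc device in the local list
lctl dl | grep OST0003

# Confirm the MDS can reach the OSS over LNET (placeholder NID)
lctl ping 10.0.0.5@tcp

# On the MDS: check whether object creation on that OST is disabled
lctl get_param osp.l2-OST0003-osc-MDT0000.max_create_count
[/code]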
2010 Jul 08
5
No space left on device on not full filesystem
Hello,
We are running Lustre 1.8.1 and hit a "No space left on device"
error when uploading 500 GB of small files (less than 100 KB each).
The problem seems to depend on the number of files. If we remove one
file, we can create one new file, even a GB-sized one; but if we haven't
removed something, we can't create even a very small file, for example
using touch
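With many small files the usual suspect is running out of inodes/objects on the MDT or on one OST rather than out of bytes; a quick check from any client (the mount point is a placeholder):
[code]
lfs df -h /mnt/lustre   # per-target space usage
lfs df -i /mnt/lustre   # per-target inode usage; a target at 100% IUse% gives ENOSPC
[/code]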
2012 Sep 27
4
Bad reporting inodes free
Hello,
When I run a "df -i" in my clients I get 95% indes used or 5% inodes free:
Filesystem                           Inodes    IUsed     IFree    IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs  22200087  20949839  1250248  95%   /mnt/data
But if I run "lfs df -i" I get:
UUID Inodes IUsed IFree I
2012 Oct 18
1
lfs_migrate question
Hi,
I suffered an OSS crash where my OSS server had a CPU fault. I have it running again, but I am trying to decommission it. I am migrating the data off of it onto other OSTs using the lfs find command with lfs_migrate.
It's been nearly 36 hours and about 2 terabytes have been moved. This means I am about halfway. Is this a decent rate?
Here are the particulars, which
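For reference, the usual draining pattern looks roughly like this; the device number, OST UUID, and mount point are placeholders for this particular system:
[code]
# On the MDS: stop new object allocation on the OST being drained
lctl --device <devno> deactivate

# On a client: restripe every file that has objects on that OST
lfs find /mnt/lustre --obd lustre-OST0004_UUID | lfs_migrate -y
[/code]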
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple
Ethernet config, with MDT and OST on the same node. Can someone tell
me if the following (~150 second recovery occurring when a small 190 GB
OST is re-mounted) is expected behavior, or if I'm missing something?
I thought I would send this and continue with the eval while awaiting
a response.
I'm using
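Recovery progress can be watched from the proc file named in the subject line; a hedged example using the 1.6-era path:
[code]
cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
# or, on releases with lctl get_param:
lctl get_param obdfilter.datafs-OST0000.recovery_status
[/code]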
2010 Jul 30
2
lustre 1.8.3 upgrade observations
Hello,
1) When compiling the Lustre modules for the server, the ./configure script behaves a bit oddly.
The --enable-server option is silently ignored when the kernel is not 100% patched.
Unfortunately the build appears to work for the server, but during the mount the error message complains about a missing "lustre" module, which is in fact loaded and running.
What is really missing are the ldiskfs et al
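A hedged sketch of pointing the server build at a fully patched kernel tree so that --enable-server is honored; the kernel path is a placeholder and configure options vary by release:
[code]
./configure --enable-server --with-linux=/usr/src/linux-2.6.18-lustre
make
make rpms    # or make install
[/code]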
2008 Feb 05
2
lctl deactivate questions
Hi;
One of our OSTs filled up. Once we realized this,
we executed
lctl --device 9 deactivate
on our fs's combo MDS/MGS machine.
We saw in the syslog that the OST in
question was deactivated:
Lustre: setting import ufhpc-OST0008_UUID INACTIVE by administrator request
However, 'lfs df' on the clients does not show
that the OST is deactivated there, unless we *also*
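The client-side view can be inspected separately; a hedged example reusing the ufhpc fsname from the log line, with a placeholder mount point (note that setting active=0 on a client would also block reads of existing objects on that OST from that client):
[code]
# On a client: check how that OST's OSC is currently viewed
lctl get_param osc.ufhpc-OST0008-osc-*.active
lfs df /mnt/ufhpc        # an OST deactivated on the client is listed as inactive
[/code]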
2004 Jan 11
3
Lustre 1.0.2 packages available
Greetings--
Packages for Lustre 1.0.2 are now available in the usual place
http://www.clusterfs.com/download.html
This bug-fix release resolves a number of issues, of which a few are
user-visible:
- the default debug level is now a more reasonable production value
- zero-copy TCP is now enabled by default, if your hardware supports it
- you should encounter fewer allocation failures
2008 Jan 10
4
1.6.4.1 - active client evicted
Hi!
We've started to poke and prod at Lustre 1.6.4.1, and it seems to
mostly work (we haven't had it OOPS on us yet like the earlier
1.6 versions did).
However, we had this weird incident where an active client (it was
copying 4GB files and running ls at the time) got evicted by the MDS
and all OSTs. After a while the logs indicate that it did recover the
connection
2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings!
I am working with Lustre-2.1.2 on RHEL 6.2. First I configured it
using the standard defaults over TCP/IP. Everything worked very
nicely using a real, static --mgsnode=a.b.c.x value, which was the
actual IP of the MGS/MDS system1 node.
I am now trying to integrate it with Pacemaker-1.1.7. I believe I
have most of the set-up completed with a particular exception. The
"lctl
2008 Feb 12
0
Lustre-discuss Digest, Vol 25, Issue 17
Hi,
I just want to know whether there are any alternative file systems to HP SFS.
I heard that there is the Cluster Gateway from PolyServe. Can anybody please help me find out more about this Cluster Gateway?
Thanks and Regards,
Ashok Bharat
-----Original Message-----
From: lustre-discuss-bounces at lists.lustre.org on behalf of lustre-discuss-request at lists.lustre.org
Sent: Tue 2/12/2008 3:18 AM
2010 Sep 18
0
no failover with failover MDS
Hi all,
we have two servers A, B as a failover MGS/MDT pair, with IPs
A=10.12.112.28 and B=10.12.115.120 over tcp.
When server B crashes, MGS and MDT are mounted on A. Recovery times out
with only one out of 445 clients recovered.
Afterwards, the MDT lists all its OSTs as UP and in the logs of the OSTs
I see:
Lustre: MGC10.12.112.28@tcp: Connection restored to service MGS using
nid
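For clients to follow a failover MGS/MDT pair, both NIDs generally have to be given at mount time; an illustrative client mount using the two addresses above, with a placeholder fsname:
[code]
mount -t lustre 10.12.115.120@tcp:10.12.112.28@tcp:/fsname /mnt/lustre
[/code]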
2008 Mar 07
2
Multihomed question: want Lustre over IB and Ethernet
Chris,
Perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not.
Shane
----- Original Message -----
From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org>
To: lustre-discuss <lustre-discuss at lists.lustre.org>
Sent: Fri Mar 07 12:03:17 2008
Subject: Re: [Lustre-discuss] Multihomed
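Serving Lustre over both IB and Ethernet usually means declaring both LNET networks on every node and, when server NIDs change on a 1.6 filesystem, regenerating the configuration logs with writeconf; a hedged sketch with illustrative interface and device names:
[code]
# /etc/modprobe.conf (or modprobe.d) on servers and clients
options lnet networks="o2ib0(ib0),tcp0(eth0)"

# With the filesystem stopped, regenerate the config logs on each target
tunefs.lustre --writeconf /dev/mdtdev
tunefs.lustre --writeconf /dev/ostdev
[/code]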