
Displaying 20 results from an estimated 7000 matches similar to: "Getting a list of files on down OST"

2008 Mar 03 | 1 | Quota setup fails because of OST ordering
Hi all, after installing a Lustre test file system consisting of 34 OSTs, I encountered a strange error when trying to set up quotas: lfs quotacheck gave me an "Input/Output error", while in /var/log/kern.log I found a Lustre error:
LustreError: 20807:0:(quota_check.c:227:lov_quota_check()) lov idx 32 inactive
Indeed, in /proc/fs/lustre/lov/.../target_obd all 34 OSTs were listed
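
A minimal way to cross-check the LOV state against quotacheck, assuming a client mount at /mnt/lustre (path hypothetical) and the pre-2.4 quota tools:
[code]
# list the OSTs the MDS's LOV layer knows about (index, UUID, status)
cat /proc/fs/lustre/lov/*/target_obd
# user+group quotacheck; returns EIO if the LOV considers any OST inactive
lfs quotacheck -ug /mnt/lustre
[/code]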

2007 Nov 23 | 2 | How to remove OST permanently?
All, I've added a new 2.2 TB OST to my cluster easily enough, but this new disk array is meant to replace several smaller OSTs that I used to have, which were only 120 GB, 500 GB, and 700 GB. Adding an OST is easy, but how do I REMOVE the small OSTs that I no longer want to be part of my cluster? Is there a command to tell Lustre to move all the file stripes off one of the nodes?
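
There is no single "remove OST" command on these releases; the usual sequence is to drain the OST and then deactivate it. A sketch, assuming the old OST is lustre-OST0003 and a client mount at /mnt/lustre (both hypothetical):
[code]
# on the MDS: find the osc device number for the OST, then stop new allocations
lctl dl | grep OST0003
lctl --device <devno> deactivate
# on a client: move the existing stripes onto the remaining OSTs
lfs find --obd lustre-OST0003_UUID /mnt/lustre | lfs_migrate -y
# on the MGS: make the deactivation permanent across remounts
lctl conf_param lustre-OST0003.osc.active=0
[/code]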

2010 Jul 08 | 5 | No space left on device on not-full filesystem
Hello, we are running Lustre 1.8.1 and have hit a "No space left on device" error while uploading 500 GB of small files (less than 100 KB each). The problem seems to depend on the number of files: if we remove one file, we can create one new file, even a GB-sized one; but if we haven't removed anything, we can't create even a very small file, for example using touch
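
ENOSPC on a file system that is not full is often a single full OST (every file's stripes must fit on its OSTs) or inode exhaustion; both are quick to check from a client, assuming a mount at /mnt/lustre (hypothetical):
[code]
lfs df -h /mnt/lustre   # per-OST space: one OST at 100% already yields ENOSPC
lfs df -i /mnt/lustre   # per-target inodes: many small files can run these out first
[/code]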

2013 Apr 29 | 1 | OSTs inactive on one client (only)
Hi everyone, I have seen this question here before, but without a very satisfactory answer. One of our half a dozen clients has lost access to a set of OSTs:
> lfs osts
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
2: lustre-OST0002_UUID INACTIVE
3: lustre-OST0003_UUID INACTIVE
4: lustre-OST0004_UUID INACTIVE
5: lustre-OST0005_UUID ACTIVE
6: lustre-OST0006_UUID ACTIVE
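
On the affected client the OSC devices can be inspected and retried by hand; a sketch (the device number is an example to be read from lctl dl):
[code]
lctl dl | grep OST0002          # leading number is the local device number
lctl --device <devno> activate  # ask the client to re-establish that import
lfs osts                        # confirm the OST is ACTIVE again
[/code]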

2013 Mar 18 | 1 | lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G       6%    /mnt/lustre[MDT:0]
[/code]

2013 Mar 18 | 1 | OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes  Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G   274.3M  3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID
[/code]
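
Basic connectivity checks from the client toward the servers, using the NID shown above:
[code]
lctl ping 10.94.214.185@tcp   # LNET-level reachability of the MDS
lfs check servers             # RPC health of each server as seen by this client
[/code]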

2008 Feb 05 | 2 | lctl deactivate questions
Hi; one of our OSTs filled up. Once we realized this, we executed lctl --device 9 deactivate on our fs's combo MDS/MGS machine. We saw in the syslog that the OST in question was deactivated:
Lustre: setting import ufhpc-OST0008_UUID INACTIVE by administrator request
However, 'lfs df' on the clients does not show that the OST is deactivated there, unless we *also*
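
Deactivating the OSC on the MDS only stops new object allocations there; every client has its own OSC device for the OST, so 'lfs df' on the clients will not change until the OST is deactivated per client or centrally. A sketch, using the fsname from the log line above:
[code]
# per client (repeat on each node, device number from lctl dl):
lctl --device <devno> deactivate
# or once on the MGS, pushed to all nodes and persistent:
lctl conf_param ufhpc-OST0008.osc.active=0
[/code]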

2012 Sep 27 | 4 | Bad reporting inodes free
Hello, when I run "df -i" on my clients I get 95% inodes used, i.e. 5% inodes free:
Filesystem                           Inodes    IUsed     IFree    IUse%  Mounted on
lustre-mds-01:lustre-mds-02:/cetafs  22200087  20949839  1250248  95%    /mnt/data
But if I run lfs df -i I get:
UUID  Inodes  IUsed  IFree  I
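
A plain 'df -i' on a client reports a single aggregated figure, which for Lustre is usually dominated by the MDT's inode count, while 'lfs df -i' breaks the numbers out per target, so the two rarely agree; for example:
[code]
lfs df -i /mnt/data   # per-MDT/per-OST inode counts instead of one aggregate
[/code]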

2010 Aug 11 | 3 | lfs --obd discrepancy to lctl dl (1.8.3)
Hello, lfs prints a different obd(idx) compared to lctl dl. We use single striping.
cluster1 tmp # lfs find --obd scia-OST0017_UUID /data/scia/L0/V0.00/20100327/SCI_NL__0PNPDE20100327_193441_000040582088_00071_42209_1158.N1
/data/scia/L0/V0.00/20100327/SCI_NL__0PNPDE20100327_193441_000040582088_00071_42209_1158.N1
cluster1 tmp # lfs getstripe
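
The index reported by lfs is the OST's position inside the LOV (obdidx), while lctl dl numbers devices on the local node, so the two listings need not line up; comparing them side by side makes the mapping explicit:
[code]
lfs getstripe /data/scia/L0/V0.00/20100327/SCI_NL__0PNPDE20100327_193441_000040582088_00071_42209_1158.N1   # prints obdidx
lctl dl | grep scia-OST0017   # local device number, unrelated to obdidx
[/code]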

2013 Feb 12 | 2 | Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem, and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this:
For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
If you are unable to connect both
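
One step that is easy to miss after moving the MGS/MDS: the OSTs (and clients) still hold configuration logs pointing at the old MGS NID. A hedged sketch of the re-registration, with device path and NID as placeholders:
[code]
# on each OSS, with the target unmounted:
tunefs.lustre --writeconf --mgsnode=<new-mgs-nid> /dev/<ostdev>
[/code]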

2012 Oct 18 | 1 | lfs_migrate question
Hi, I suffered an OSS crash where my OSS server had a CPU fault. I have it running again, but I am trying to decommission it. I am migrating the data off of it onto other OSTs using the lfs find command with lfs_migrate. It's been nearly 36 hours and about 2 terabytes have been moved. This means I am about halfway. Is this a decent rate? Here are the particulars, which
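
For scale: 2 TB in 36 hours works out to roughly 2,000,000 MB / 129,600 s, i.e. about 15 MB/s. That is slow for healthy hardware, but not surprising, since lfs_migrate copies files one at a time and small files keep the rate far below streaming bandwidth.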

2014 Nov 13 | 0 | OST acting up
Whoops, sent from the wrong email address, so again from the right one: Hello, I am using Lustre 2.4.2 and have an OST that doesn't seem to be written to. When I check the MDS with 'lctl dl' I do not see that OST in the list. However, when I check the OSS that OST belongs to, I can see it is mounted and up:
0 UP osd-zfs l2-OST0003-osd l2-OST0003-osd_UUID 5
3 UP obdfilter l2-OST0003
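
A first triage pass for an OST that the MDS does not list, assuming fsname l2 as in the log above (the OSS NID is a placeholder):
[code]
# on the MDS: is there an osc device for OST0003 at all?
lctl dl | grep OST0003
# from the MDS: can the OSS be reached over LNET?
lctl ping <oss-nid>
[/code]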

2007 Nov 26 | 15 | bad 1.6.3 striped write performance
Hi, I'm seeing what can only be described as dismal striped write performance from Lustre 1.6.3 clients :-/ 1.6.2 and 1.6.1 clients are fine. 1.6.4rc3 clients (from CVS a couple of days ago) are also terrible. The below shows that the OS (CentOS 4.5/5), fabric (gigE/IB), and Lustre version on the servers don't matter: the problem is with the 1.6.3 and 1.6.4rc3 client kernels
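
A simple way to reproduce a striped-write figure on a suspect client, using the option syntax of later lfs versions (file name hypothetical):
[code]
lfs setstripe -c 4 /mnt/lustre/stripetest                              # 4-way striped file
dd if=/dev/zero of=/mnt/lustre/stripetest bs=1M count=1024 conv=fsync
[/code]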

2008 Mar 07 | 2 | Multihomed question: want Lustre over IB and Ethernet
Chris, perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not. Shane
----- Original Message -----
From: lustre-discuss-bounces@lists.lustre.org <lustre-discuss-bounces@lists.lustre.org>
To: lustre-discuss <lustre-discuss@lists.lustre.org>
Sent: Fri Mar 07 12:03:17 2008
Subject: Re: [Lustre-discuss] Multihomed
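
For reference, regenerating the configuration logs after a NID change is done with a writeconf while the targets are unmounted; a sketch with placeholder device paths:
[code]
tunefs.lustre --writeconf /dev/<mdtdev>   # on the MDS
tunefs.lustre --writeconf /dev/<ostdev>   # on each OSS
[/code]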

2007 Nov 29 | 2 | Balancing I/O Load
We are seeing some disturbing (probably due to our ignorance) behavior from Lustre 1.6.3 right now. We have 8 OSSs with 3 OSTs per OSS (24 physical LUNs). We just created a brand new Lustre file system across this configuration using the default mkfs.lustre formatting options. We have this file system mounted across 400 clients. At the moment, we have 63 IOzone threads running
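
With default striping each file lands on a single OST, so a few heavy writers can converge on the same LUNs; the skew is visible per target, e.g. (mount point hypothetical):
[code]
lfs df -h /mnt/lustre      # fill level of each of the 24 OSTs
lfs getstripe <busy-file>  # which OST a given file's objects live on
[/code]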

2004 Jan 11 | 3 | Lustre 1.0.2 packages available
Greetings-- Packages for Lustre 1.0.2 are now available in the usual place http://www.clusterfs.com/download.html This bug-fix release resolves a number of issues, of which a few are user-visible:
- the default debug level is now a more reasonable production value
- zero-copy TCP is now enabled by default, if your hardware supports it
- you should encounter fewer allocation failures

2010 Jul 05 | 4 | Adding OST to online Lustre with quota
Hello, we wonder whether it is possible to add OSTs to a Lustre file system with quota support without taking it offline. We tried to do this, but all quota information was lost. Despite the fact that the OST was formatted with quota support, we are receiving this error message:
Lustre: 3743:0:(lproc_quota.c:447:lprocfs_quota_wr_type()) lustrefs-OST0016: quotaon failed because quota files
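
On the 1.8-era quota code each target keeps its own quota files, so a freshly added OST has none until the quota database is rebuilt; the usual remedy is to re-run quotacheck from a client (mount point hypothetical), which is disruptive on a busy file system:
[code]
lfs quotacheck -ug /mnt/lustre   # regenerates quota files, including on the new OST
[/code]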

2010 Sep 04 | 0 | Set quota on Lustre file system client, reboots MDS/MGS node
Hi, I used lustre-1.8.3 on CentOS 5.4. I patched the kernel according to the Lustre 1.8 Operations Manual (PDF). I have a problem when I want to implement quota. My cluster configuration is:
1. one MGS/MDS host (with two devices, sda and sdb respectively), set up with the following commands:
1) mkfs.lustre --mgs /dev/sda
2) mount -t lustre /dev/sda /mnt/mgt
3) mkfs.lustre --fsname=lustre
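
For completeness, the commands that would follow a successful quota setup, assuming user alice and a client mount at /mnt/lustre (both hypothetical; block limits are in KB):
[code]
lfs quotacheck -ug /mnt/lustre
lfs setquota -u alice -b 1000000 -B 1100000 -i 10000 -I 11000 /mnt/lustre
lfs quota -u alice /mnt/lustre
[/code]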

2008 Feb 05 | 2 | obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple Ethernet config, with MDT and OST on the same node. Can someone tell me if the following (~150 second recovery occurring when a small 190 GB OST is re-mounted) is expected behavior, or if I'm missing something? I thought I would send this and continue with the eval while awaiting a response. I'm using
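
The recovery window in question can be watched directly on the OSS; the path below comes from the subject line:
[code]
cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
[/code]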