similar to: OST acting up

Displaying 20 results from an estimated 100 matches similar to: "OST acting up"

2013 Apr 29
1
OSTs inactive on one client (only)
Hi everyone, I have seen this question here before, but without a very satisfactory answer. One of our half a dozen clients has lost access to a set of OSTs: > lfs osts OBDS: 0: lustre-OST0000_UUID ACTIVE 1: lustre-OST0001_UUID ACTIVE 2: lustre-OST0002_UUID INACTIVE 3: lustre-OST0003_UUID INACTIVE 4: lustre-OST0004_UUID INACTIVE 5: lustre-OST0005_UUID ACTIVE 6: lustre-OST0006_UUID ACTIVE
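When OSTs show INACTIVE on only one client, the usual first step is to reactivate the client-side OSC devices rather than touching the servers. A minimal sketch, assuming the device numbers come from lctl dl on the affected client (the exact names and numbers vary with the filesystem):
[code]
# On the affected client: list local devices and find the OSC for each inactive OST
lctl dl | grep osc

# Reactivate by device number (e.g. device 9 mapping to lustre-OST0002)
lctl --device 9 activate

# Confirm the target is ACTIVE again
lfs osts
[/code]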
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem, and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this: For example: mount -t ldiskfs /dev/old /mnt/ost_old mount -t ldiskfs /dev/new /mnt/ost_new rsync -aSv /mnt/ost_old/ /mnt/ost_new # note trailing slash on ost_old/ If you are unable to connect both
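For reference, the ldiskfs-level copy above only preserves Lustre metadata if the extended attributes travel with it; on an MDT the striping information lives in EAs, and an rsync without xattr support can produce exactly this kind of "lost folders" symptom. A hedged sketch of the same copy with xattrs preserved (device paths are placeholders):
[code]
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new

# -X/--xattrs carries the Lustre EAs; requires an rsync built with xattr support
rsync -aSvX /mnt/ost_old/ /mnt/ost_new   # note trailing slash on ost_old/
[/code]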
2008 Jan 10
4
1.6.4.1 - active client evicted
Hi! We've started to poke and prod at Lustre 1.6.4.1, and it seems to mostly work (we haven't had it OOPS on us yet like the earlier 1.6 versions did). However, we had this weird incident where an active client (it was copying 4GB files and running ls at the time) got evicted by the MDS and all OSTs. After a while the logs indicate that it did recover the connection
2008 Mar 06
2
strange lustre errors
Hi, On a few of the HPC cluster nodes, I am seeing a new Lustre error that is pasted below. The volumes are working fine and there is nothing on the OSS and MDS to report. LustreError: 5080:0:(import.c:607:ptlrpc_connect_interpret()) data3-OST0000_UUID@192.168.2.98@tcp changed handle from 0xfe51139158c64fae to 0xfe511392a35878b3; copying, but this may foreshadow disaster
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows: [code] [root@MDS ~]# lctl list_nids 10.94.214.185@tcp [root@MDS ~]# [/code] On Lustre Client1: [code] [root@lustreclient1 lustre]# lfs df -h UUID bytes Used Available Use% Mounted on lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6% /mnt/lustre[MDT:0]
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows: [code] [root@MDS ~]# lctl list_nids 10.94.214.185@tcp [root@MDS ~]# [/code] On Lustre Client1: [code] [root@lustreclient1 lustre]# lfs df -h UUID bytes Used Available Use% Mounted on lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6% /mnt/lustre[MDT:0] lustre-OST0000_UUID
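A quick triage sketch for an "inactive device" in lfs df, assuming the NID shown above and a client mount at /mnt/lustre:
[code]
# Verify LNET connectivity from the client to the MDS NID
lctl ping 10.94.214.185@tcp

# List local devices; an inactive OSC shows its state here
lctl dl

# Per-target usage; inactive OSTs are flagged in the listing
lfs df -h /mnt/lustre
[/code]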
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi, We run a four-node Lustre 2.3 setup, and I needed both to change the hardware under the MGS/MDS and to reassign an OSS IP. At the same time, I added a brand new 10GbE network to the system, which was the reason for the MDS hardware change. I ran tunefs.lustre --writeconf as per chapter 14.4 in the Lustre Manual, and everything mounts fine. Log regeneration apparently works, since it seems to do something, but
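For context, the writeconf procedure from the manual regenerates the configuration logs on first mount, and the order matters. A condensed sketch with placeholder device paths:
[code]
# All servers and clients unmounted first
tunefs.lustre --writeconf /dev/mdtdev    # on the MDS
tunefs.lustre --writeconf /dev/ostdev    # on each OSS, for every OST

# Remount MGS/MDT first so the new logs are written, then OSTs, then clients
mount -t lustre /dev/mdtdev /mnt/mdt
mount -t lustre /dev/ostdev /mnt/ost0
[/code]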
2010 Jul 08
5
No space left on device on not full filesystem
Hello, We are running Lustre 1.8.1 and hit a "No space left on device" error when uploading 500 GB of small files (less than 100 KB each). The problem seems to depend on the number of files. If we remove one file, we can create one new file, even a GB-sized one; but if we haven't removed anything, we can't create even a very small file, as an example using touch
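ENOSPC on a filesystem that is far from full usually means either one OST filled up or the MDT ran out of inodes, and the dependence on file count here points at the latter. A quick check, assuming a client mount at /mnt/lustre:
[code]
# Blocks and inodes per target; a single full OST or an inode-exhausted
# MDT returns ENOSPC even while the aggregate 'df' looks healthy
lfs df -h /mnt/lustre
lfs df -i /mnt/lustre
[/code]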
2013 May 10
12
Interested in contributing to Lustre
Hi all, I am a grad student at Carnegie Mellon University. I took coursework in advanced storage systems last semester, and I am interested in working on Lustre. I would prefer a project that could be completed in a month or two. Since I am a novice with respect to the Lustre code base, I seek your opinion in choosing a project from the list:
2012 Sep 27
4
Bad reporting inodes free
Hello, When I run "df -i" on my clients I get 95% inodes used, i.e. 5% inodes free: Filesystem Inodes IUsed IFree IUse% Mounted on lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248 95% /mnt/data But if I run lfs df -i I get: UUID Inodes IUsed IFree I
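The client-side df -i figure is an aggregate derived largely from the MDT's inode count, so it can disagree with the per-target view; comparing the two is the quickest sanity check (mount point as shown above):
[code]
# Per-target inode usage; compare the MDT line against plain 'df -i'
lfs df -i /mnt/data
[/code]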
2008 Feb 22
0
lustre error
Dear All, Yesterday evening our cluster stopped. Two of our nodes tried to take the resource from each other; as far as I could tell, neither could see the other. I stopped heartbeat, stopped the resources, started them again, and everything came back online and worked fine. This morning I saw this in the logs: Feb 22 03:25:07 node4 kernel: Lustre: 7:0:(linux-debug.c:98:libcfs_run_upcall()) Invoked LNET upcall
2010 Aug 06
1
Deprecated client still shown on OST exports
Some clients were removed several weeks ago but are still listed in: ls -l /proc/fs/lustre/obdfilter/*/exports/ This was found after tracing mystery TCP packets back to the OSS. Although this is causing no damage, it raises the question of when former clients will be cleared from the OSS. Is there a way to manually remove these exports from the OSS? -- Regards, David
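One commonly cited answer, hedged since the exact interface varies by release: 1.8-era servers expose an evict_client entry per OST that drops an export by NID or UUID. The target name and NID below are placeholders:
[code]
# On the OSS: force out a stale export (placeholder OST name and NID)
lctl set_param obdfilter.lustre-OST0000.evict_client=nid:192.168.1.10@tcp
[/code]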
2007 Nov 26
1
parallelism across oss or ost?
If I have a single OSS with multiple OSTs, do I get any parallelism when striping across the OSTs? I.e., would a client form multiple connections, over whatever transport, between itself and the OSS?
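Striping does give parallelism at the object level even behind a single OSS: the client keeps a separate OSC (with its own RPC stream) per OST, though they share the transport to that OSS. A sketch using the 1.x option names (-s became -S in later releases):
[code]
# Stripe new files in this directory across 4 OSTs, 1 MB stripe size
lfs setstripe -c 4 -s 1m /mnt/lustre/dir

# Inspect the resulting layout
lfs getstripe /mnt/lustre/dir/somefile
[/code]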
2010 Jul 21
1
Getting a list of files on down OST
Hi Guys, I'm trying to figure out a way to get a list of files with objects present on an OST that is down. Normally one could do: lfs find -O <OST> dir but that is giving us Input/output errors (I assume because the OST is down). Is there a good way to get a list of objects (Maybe from the MDS?), what OSTs they are on, and correlate them with files? Thanks, Mark -- Mark
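The manual's approach, sketched here with a hypothetical target index: deactivate the dead OST's OSC on the client so object stats fail fast instead of returning errors, then run the find:
[code]
# Find the client-side device number of the OSC for the dead OST
lctl dl | grep OST000b

# Deactivate it locally (device number taken from the lctl dl output)
lctl --device 11 deactivate

# Now list files having objects on that OST
lfs find --obd lustre-OST000b_UUID /mnt/lustre
[/code]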
2008 Mar 03
1
Quota setup fails because of OST ordering
Hi all, after installing a Lustre test file system consisting of 34 OSTs, I encountered a strange error when trying to set up quotas: lfs quotacheck gave me an "Input/Output error", while in /var/log/kern.log I found a Lustre error LustreError: 20807:0:(quota_check.c:227:lov_quota_check()) lov idx 32 inactive Indeed, in /proc/fs/lustre/lov/.../target_obd all 34 OSTs were listed
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't find a bz for this, but it would appear that tunefs.lustre --print fails on a Lustre MDT or OST device if mounted with MMP. Is this expected behavior? TIA mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1 checking for existing Lustre data: not found tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
2013 Sep 19
0
Files written to an OST are corrupted
Hi, everyone, I need some help in figuring out what may have happened here, as newly created files on an OST are being corrupted. I don't know if this applies to all files written to this OST, or just to files of order 2GB size, but files are definitely being corrupted, with no errors reported by the OSS machine. Let me describe the situation. We had been running Lustre 1.8.4 for
2010 Jul 05
4
Adding OST to online Lustre with quota
Hello, we wonder whether it is possible to add OSTs to a Lustre filesystem with quota support without taking it offline. We tried to do this, but all quota information was lost. Even though the OST was formatted with quota support, we are receiving this error message: Lustre: 3743:0:(lproc_quota.c:447:lprocfs_quota_wr_type()) lustrefs-OST0016: quotaon failed because quota files
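As a hedged note: on 1.8-era releases, quota files have to be re-initialized after a target is added, which means re-running quotacheck against the mounted filesystem (quota enforcement lapses while it scans):
[code]
# After the new OST is up and visible to clients
lfs quotacheck -ug /mnt/lustre
[/code]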
2008 Jan 02
9
lustre quota problems
Hello, I have several problems with quota on our test cluster: When I set the quota for a user to a given value (e.g. the values provided in the operations manual), I am able to write exactly the amount which is set with setquota. But when I delete the file(s), I am not able to use this space again. Here is what I've done in detail: lfs checkquota
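For reference, a minimal quota round-trip using the option-style syntax of later 1.x releases (user name and limits are examples; block limits are in 1 KB units):
[code]
# 300 MB soft / 310 MB hard block limits, 10000/11000 inode limits
lfs setquota -u bob -b 307200 -B 317440 -i 10000 -I 11000 /mnt/lustre

# Check the accounting; space from deleted files is released asynchronously,
# so usage can briefly lag an unlink
lfs quota -u bob /mnt/lustre
[/code]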
2008 Dec 24
6
Bug when using /dev/cciss/c0d2 as mdt/ost
I am trying to build lustre-1.6.6 against the pre-patched kernel downloaded from SUN. But as written in the Operations Manual, it creates RPMs for 2.6.18-92.1.10.el5_lustrecustom. Is there a way to ask it not to append "custom" as the extraversion? The running kernel is 2.6.18-92.1.10.el5_lustre.1.6.6smp. -- Regards-- Rishi Pathak National PARAM Supercomputing Facility Center for Development of Advanced
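Since the running kernel already matches the patched release, one option (sketched here; the source path is a placeholder) is to skip rebuilding the kernel RPMs entirely and build just the Lustre packages against the installed source tree, avoiding the spec-file extraversion handling altogether:
[code]
cd lustre-1.6.6
./configure --with-linux=/usr/src/linux-2.6.18-92.1.10.el5_lustre.1.6.6
make rpms
[/code]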