
Displaying 20 results from an estimated 1000 matches similar to: "how do you mount mountconf (i.e. 1.6) lustre on your servers?"

2007 Nov 23
2
How to remove OST permanently?
All, I've added a new 2.2 TB OST to my cluster easily enough, but this new disk array is meant to replace several smaller OSTs (only 120 GB, 500 GB, and 700 GB) that I used to have. Adding an OST is easy, but how do I REMOVE the small OSTs that I no longer want to be part of my cluster? Is there a command to tell lustre to move all the file stripes off one of the nodes?
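For the 1.6-era releases discussed here, the usual approach is to deactivate the OST so no new objects are allocated on it, then migrate the existing stripes off before retiring it. A rough sketch, with the device number, UUID, and mount point as example values (lfs_migrate ships with newer releases; on older ones a copy-and-rename loop does the same job):

    # On the MDS: find the osc device for the OST to retire, then deactivate it
    lctl dl | grep osc
    lctl --device 11 deactivate
    # On a client: migrate files that have stripes on that OST
    lfs find --obd testfs-OST0002_UUID /mnt/testfs | lfs_migrate -y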
2007 Mar 20
15
How to bypass failed OST without blocking?
Hi, I want my Lustre setup to behave as follows when an OST fails: if a file has stripe data on the failed OST, any operation on that file should return an IO error without blocking; at the same time I should still be able to create new files, and read/write files that have no stripe data on the failed OST, without blocking. What should I do? How do I configure this? thanks! swin
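What is being asked for is roughly what deactivating the OSC for the failed OST on each client gives you: IO against objects on that OST fails fast instead of blocking, while files with no stripes there stay usable. A minimal sketch, with the device number as an example value:

    # On each client: find the osc device pointing at the failed OST
    lctl dl | grep osc
    lctl --device 7 deactivate
    # Re-enable it once the OST is healthy again
    lctl --device 7 activate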
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi, We run a four-node Lustre 2.3 setup, and I needed to both change the hardware under the MGS/MDS and reassign an OSS IP. At the same time, I added a brand new 10GE network to the system, which was the reason for the MDS hardware change. I ran tunefs.lustre --writeconf as per chapter 14.4 in the Lustre Manual, and everything mounts fine. Log regeneration apparently works, since it seems to do something, but
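For reference, the writeconf procedure from that chapter is, in outline (device paths below are example values): stop the filesystem everywhere, regenerate the configuration logs, and remount servers before clients:

    umount ...                              # all clients, then OSTs, then the MDT
    tunefs.lustre --writeconf /dev/mdtdev   # on the MDS
    tunefs.lustre --writeconf /dev/ostdev   # on each OSS
    mount -t lustre /dev/mdtdev /mnt/mdt    # MGS/MDT first
    mount -t lustre /dev/ostdev /mnt/ost    # then OSTs, then clients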
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating lustre. I'm trying what I think is a basic/simple ethernet config, with the MDT and OST on the same node. Can someone tell me if the following (~150 second recovery occurring when a small 190 GB OST is re-mounted) is expected behavior, or if I'm missing something? I thought I would send this and continue with the eval while awaiting a response. I'm using
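Recovery progress can be watched directly through the proc file named in the subject line; fields such as status, recovery_duration, and completed_clients show whether the ~150 second window is spent on clients reconnecting or on the recovery timer running out:

    cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status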
2008 Feb 04
32
Lustre clients getting evicted
on our cluster that has been running lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster uses all GigE and has about 608 nodes / 1854 cores. We have a lot of jobs that die and/or go into high IO wait; strace shows processes stuck in fstat(). The big problem (I think; I would like some feedback on this) is that of these 608 nodes, 209 of them have in dmesg
2007 Nov 07
9
How To change server recovery timeout
Hi, Our lustre environment is: 2.6.9-55.0.9.EL_lustre.1.6.3smp. I would like to change the recovery timeout from the default value of 250s to something longer. I tried the example from the manual: set_timeout <secs> Sets the timeout (obd_timeout) for a server to wait before failing recovery. We performed that experiment on our test lustre installation with one OST. storage02 is our OSS [root@
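On 1.6, the two common ways to change this are a permanent setting pushed from the MGS or a temporary one written on each server; the filesystem name and value below are example values:

    # Permanent, run on the MGS:
    lctl conf_param testfs.sys.timeout=600
    # Temporary, per server:
    echo 600 > /proc/sys/lustre/timeout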
2008 Jan 10
4
1.6.4.1 - active client evicted
Hi! We've started to poke and prod at Lustre 1.6.4.1, and it seems to mostly work (we haven't had it OOPS on us yet like the earlier 1.6-versions did). However, we had this weird incident where an active client (it was copying 4GB files and running ls at the time) got evicted by the MDS and all OSTs. After a while logs indicate that it did recover the connection
2010 Aug 06
1
Deprecated client still shown on OST exports
Some clients were removed several weeks ago but are still listed in: ls -l /proc/fs/lustre/obdfilter/*/exports/ This was found after tracing back mystery tcp packets to the OSS. Although this is causing no damage, it raises the question of when former clients will be cleared from the OSS. Is there a way to manually remove these exports from the OSS? -- Regards, David
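One way such a stale export can usually be cleared is by evicting the old client's NID through the evict_client proc file on the OSS; the NID and target name here are example values, and the "nid:" prefix selects eviction by NID rather than by client UUID:

    echo "nid:192.168.1.5@tcp" > /proc/fs/lustre/obdfilter/datafs-OST0000/evict_client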
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup lustre filesystem and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this:

For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new   # note trailing slash on ost_old/

If you are unable to connect both
2007 Nov 06
4
Checksum Algorithm
Hi, We have seen a huge performance drop in 1.6.3, due to the checksum being enabled by default. I looked at the algorithm being used, and it is actually a CRC32, which is a very strong algorithm for detecting all sorts of problems, such as single bit errors, swapped bytes, and missing bytes. I've been experimenting with using a simple XOR algorithm. I've been able to recover
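To make the trade-off concrete, here is a minimal sketch of the kind of XOR-folding checksum being described (not Lustre's actual code): it is much cheaper than CRC32, but it cannot detect swapped 32-bit words, which is exactly the class of error CRC32 catches.

    /* Fold a buffer into a 32-bit word by XOR; illustration only. */
    #include <stdint.h>
    #include <stddef.h>

    uint32_t xor_checksum(const unsigned char *buf, size_t len)
    {
            uint32_t sum = 0;
            size_t i;

            /* whole 32-bit words, assembled little-endian */
            for (i = 0; i + 4 <= len; i += 4)
                    sum ^= (uint32_t)buf[i] | (uint32_t)buf[i + 1] << 8 |
                           (uint32_t)buf[i + 2] << 16 | (uint32_t)buf[i + 3] << 24;
            /* trailing bytes */
            for (; i < len; i++)
                    sum ^= (uint32_t)buf[i] << (8 * (i % 4));
            return sum;
    }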
2008 Mar 26
3
HW experience
Hi, we would like to set up a small Lustre instance. For the OSTs we are planning to use standard Dell PE1950 servers (2x QuadCore + 16 GB RAM), and for the disks a JBOD (MD1000) driven by the PE1950's internal RAID controller (RAID-6). Any experience (good or bad) with such a config? thanks, Martin
2008 Mar 11
2
Problems mounting lustre thru an ib2ip gateway
Hello, I am trying to mount a lustre filesystem thru an ib2ip gateway. The MDSs have infiniband connections. The client nodes have tcp/ip connections. I am able to route between the client nodes and the MDSs. I have the following in /etc/fstab: abe-mds1@o2ib0,abe-mds2@o2ib0:/home/client /abehome lustre _netdev,flock 0 0 I get the following when trying
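A setup like this normally also needs LNET routes configured on both sides of the gateway; a sketch with example interface names, and a hypothetical tcp address for the gateway:

    # modprobe.conf on the gateway node (has both IB and ethernet):
    options lnet networks="o2ib0(ib0),tcp0(eth0)" forwarding=enabled
    # modprobe.conf on each tcp-only client, routing o2ib via the gateway:
    options lnet networks=tcp0(eth0) routes="o2ib0 192.168.0.10@tcp0"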
2008 Jan 15
19
How do you make an MGS/OSS listen on 2 NICs?
I am running on the CentOS 5 distribution without adding any updates from CentOS. I am using the lustre 1.6.4.1 kernel and software. I have two NICs that run through different switches. I have the lustre options in my modprobe.conf set to look like this: options lnet networks=tcp0(eth1,eth0) My MGS seems to be listening only on the first interface, however. When I try to ping the 1st interface (eth1)
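With 1.6-era LNET, listing two interfaces inside one tcp network does not give two independently reachable server addresses; a common workaround is to put each NIC on its own LNET network. A sketch, with the interface names assumed from the post above:

    options lnet networks="tcp0(eth1),tcp1(eth0)"
    # Clients on the second switch then reach the MGS as <mgs-ip>@tcp1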
2007 Dec 14
1
evicting clients when shutdown cleanly?
Should I be seeing messages like: Dec 14 12:06:59 nyx170 kernel: Lustre: MGS: haven't heard from client dadccfac-8610-06e7-9c02-90e552694947 (at 141.212.30.185@tcp) in 234 seconds. I think it's dead, and I am evicting it. when the client was shut down cleanly and the lustre file system is mounted via /etc/fstab? The file system (I would hope) would be unmounted
2007 Sep 04
1
(fwd) Bug#440721: FTBFS on sparc while linking usr/klibc/libc.so
new klibc sparc build failure against gcc 4.2 ----- Forwarded message from Kilian Krause <kilian@debian.org> ----- Subject: Bug#440721: FTBFS on sparc while linking usr/klibc/libc.so From: Kilian Krause <kilian@debian.org> To: Debian Bug Tracking System <submit@bugs.debian.org> Date: Mon, 03 Sep 2007 23:35:23 +0200 Package: klibc Version: 1.5.6-2 Severity: serious
2010 Aug 17
3
Total Harmonic Distortion THD
Hi, Has anybody done THD or THD+N measurements with the CELT codec (best would be at various bit rates)? If someone could share results for mono at 64 kBit and stereo at 128 and 196 kBit, it would be great. Thank you very much, Jochen
2013 Apr 10
5
[Bug 9783] New: please don't use client-server model for local copies
https://bugzilla.samba.org/show_bug.cgi?id=9783
Summary: please don't use client-server model for local copies
Product: rsync
Version: 3.0.9
Platform: All
URL: http://lwn.net/Articles/400489/
OS/Version: Linux
Status: NEW
Severity: enhancement
Priority: P5
Component: core
2006 Jan 12
1
Problem with NLSYSTEMFIT()
Hello, I want to solve a nonlinear 3SLS problem with "nlsystemfit()". The equations are of the form y_it = f_i(x,t,theta) The functions f_i(.) have to be formulated as R-functions. When invoking "nlsystemfit()" I get the error Error in deriv.formula(eqns[[i]], names(parmnames)) : Function 'f1' is not in the derivatives table
2007 Dec 25
0
lustre performance question
Hi, We have one Lustre volume that is getting full and some other volumes that are totally empty. The one that is full is a little sluggish at times with the following messages appearing in syslog on the OSS - Lustre: 5809:0:(filter_io_26.c:698:filter_commitrw_write()) data1-OST0001: slow i_mutex 82s Lustre: 5809:0:(filter_io_26.c:711:filter_commitrw_write()) data1-OST0001: slow
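"slow i_mutex" messages like these usually point at the backend storage rather than Lustre itself; the per-OST brw_stats histogram on the OSS shows how the disk IO sizes and latencies are distributed (path layout as in the messages above):

    cat /proc/fs/lustre/obdfilter/data1-OST0001/brw_stats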
2017 Jan 12
2
[PATCH v2 1/2] drm/nouveau: Don't enable polling twice on runtime resume
As it turns out, on cards that actually have CRTCs on them we're already calling drm_kms_helper_poll_enable(drm_dev) from nouveau_display_resume() before we call it in nouveau_pmops_runtime_resume(). This leads us to accidentally trying to enable polling twice, which results in a potential deadlock between the RPM locks and drm_dev->mode_config.mutex if we end up trying to enable polling