similar to: LDISKFS-fs warnings on MDS lustre 1.6.4.2

Displaying 20 results from an estimated 300 matches similar to: "LDISKFS-fs warnings on MDS lustre 1.6.4.2"

2007 Nov 07
9
How To change server recovery timeout
Hi, Our Lustre environment is: 2.6.9-55.0.9.EL_lustre.1.6.3smp. I would like to change the recovery timeout from the default value of 250s to something longer. I tried the example from the manual: set_timeout <secs> Sets the timeout (obd_timeout) for a server to wait before failing recovery. We performed that experiment on our test Lustre installation with one OST. storage02 is our OSS [root at
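For reference, on 1.6-era Lustre the timeout the poster quotes can be changed either temporarily with lctl set_timeout or persistently via conf_param from the MGS. A minimal sketch, assuming a filesystem named "testfs" (a placeholder):

```shell
# Temporary, on a running server: raise obd_timeout to 600 seconds
lctl set_timeout 600

# Persistent, run on the MGS, applied to all servers of "testfs"
lctl conf_param testfs.sys.timeout=600

# Verify the current value
cat /proc/sys/lustre/timeout
```

The conf_param form survives remounts, while set_timeout only lasts until the next restart.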
2007 Nov 12
8
More failover issues
In 1.6.0, when creating an MDT, you could specify multiple --mgsnode options and it would fail over between them. 1.6.3 seems to take only the last one, and --mgsnode=192.168.1.252 at o2ib:192.168.1.253 at o2ib doesn't seem to fail over to the other node. Any ideas how to get around this? Robert Robert LeBlanc College of Life Sciences Computer Support Brigham Young University leblanc at
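Worth noting the two syntaxes mean different things: a colon-separated list inside one --mgsnode describes multiple NIDs of a single node, while repeating --mgsnode declares separate failover MGS nodes. A sketch of the repeated-option form, with the addresses from the post and a placeholder device:

```shell
# Two independent failover MGS nodes: one --mgsnode option per node
mkfs.lustre --fsname=testfs --mdt \
  --mgsnode=192.168.1.252@o2ib \
  --mgsnode=192.168.1.253@o2ib \
  /dev/sdb
```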
2010 Sep 30
1
ldiskfs-ext4 interoperability question
Our current Lustre servers run version 1.8.1.1 with the regular ldiskfs. We are looking to expand our Lustre file system with new servers/storage and upgrade all the Lustre servers to 1.8.4 at the same time. We would like to use ldiskfs-ext4 on the new servers to allow larger OSTs. I just want to confirm the following facts: 1. Is it possible to run different versions
2007 Oct 01
1
fsck ldiskfs-backed OSTs?
There are references to running fsck on the Lustre OSTs after a crash or power failure. However, after downloading the ClusterFS e2fsprogs and building them, e2fsck does not recognize our ldiskfs-based OSTs. Is there a way to fsck the ldiskfs-based OSTs? Thanks, Charlie Taylor UF HPC Center
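For completeness, the usual procedure with the Lustre-patched e2fsprogs is to unmount the OST and run e2fsck directly against the block device. A hedged sketch (the device name and mount point are placeholders):

```shell
# Unmount the OST first, then do a read-only check
umount /mnt/ost0
e2fsck -fn /dev/sdc    # -f: force check, -n: no changes, report only

# If the reported damage looks repairable:
e2fsck -fp /dev/sdc    # -p: automatically fix safe problems
```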
2007 Nov 16
5
Lustre Debug level
Hi, the Lustre manual 1.6 v18 says that in production the Lustre debug level should be set fairly low. The manual also says that I can verify the level by running the following commands: # sysctl portals.debug This gives the following error: error: 'portals.debug' is an unknown key. cat /proc/sys/lnet/debug gives output: ioctl neterror warning error emerg ha config console cat
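The manual's sysctl key is stale: in 1.6 the debug mask moved from portals.debug to lnet, so it lives under /proc/sys/lnet/debug. A sketch of checking and lowering it for production:

```shell
# Inspect the current debug mask
cat /proc/sys/lnet/debug

# Production-style low setting: keep only serious message classes
echo "ioctl neterror warning error emerg console" > /proc/sys/lnet/debug

# Equivalent via sysctl; the key is lnet.debug, not portals.debug
sysctl -w lnet.debug="ioctl neterror warning error emerg console"
```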
2012 Oct 09
1
MDS read-only
Dear all, two of our MDSes have repeatedly gone read-only recently, after one e2fsck on Lustre 1.8.5. After the MDT has been mounted for a while, the kernel reports errors like: Oct 8 20:16:44 mainmds kernel: LDISKFS-fs error (device cciss!c0d1): ldiskfs_ext_check_inode: bad header/extent in inode #50736178: invalid magic - magic 0, entries 0, max 0(0), depth 0(0) Oct 8 20:16:44 mainmds
2004 Jun 02
2
[Patch] for bug 81
Index: namei.c
===================================================================
--- namei.c	(revision 968)
+++ namei.c	(working copy)
@@ -526,7 +526,7 @@
 	status = -EBUSY;
-	if (!empty_dir(inode)) {
+	if (S_ISDIR(inode->i_mode) && !empty_dir(inode)) {
 		LOG_TRACE_STR("dentry is not empty, cannot delete");
 		goto
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this: For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new   # note trailing slash on ost_old/
If you are unable to connect both
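One caveat with the rsync step above: on an MDT the file bodies are mostly empty and the real metadata lives in extended attributes, which rsync -aSv does not copy. A hedged sketch of an EA-preserving variant (mount points and devices are placeholders, and rsync 3.x is assumed for -X):

```shell
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new

# -X copies extended attributes along with the file tree
rsync -aSvX /mnt/ost_old/ /mnt/ost_new

# Alternative, backup-manual style: dump and restore EAs explicitly
cd /mnt/ost_old && getfattr -R -d -m '.*' -P . > /tmp/ea.bak
cd /mnt/ost_new && setfattr --restore=/tmp/ea.bak
```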
2010 Oct 31
5
How to delete a whole destination tree (inclusive the destination its-self) with rsync (daemon)?
An HTML attachment was scrubbed... URL: <http://lists.samba.org/pipermail/rsync/attachments/20101031/bb482d8d/attachment.html>
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple Ethernet config, with the MDT and OST on the same node. Can someone tell me if the following (a ~150-second recovery occurring when a small 190 GB OST is remounted) is expected behavior, or if I'm missing something? I thought I would send this and continue with the eval while awaiting a response. I'm using
2010 Aug 14
0
Lost OSTs, remounted, now /proc/fs/lustre/obdfilter/$UUID/ is empty
Hello, We had a problem with our disk controller that required a reboot. Two of our OSTs remounted and went through the recovery window, but clients hang trying to access them. Also, /proc/fs/lustre/obdfilter/$UUID/ is empty for that OST UUID. LDISKFS FS on dm-5, internal journal on dm-5:8 LDISKFS-fs: delayed allocation enabled LDISKFS-fs: file extents enabled LDISKFS-fs: mballoc enabled
2010 Jul 08
5
No space left on device on not full filesystem
Hello, We are running Lustre 1.8.1 and have hit a "No space left on device" error when uploading 500 GB of small files (less than 100 KB each). The problem seems to depend on the number of files. If we remove one file, we can create one new file, even GB-sized; but if we haven't removed something, we can't create even a very small file, for example using touch
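The symptom described, where many small files exhaust "space" while bytes remain free, usually points at inode exhaustion rather than blocks; the per-file cost dominates. A quick check (the mount point is a placeholder):

```shell
# Block usage per MDT/OST
lfs df /mnt/lustre

# Inode usage: if IUse% on the MDT or an OST reaches 100%,
# creates fail with ENOSPC even though block space remains
lfs df -i /mnt/lustre
```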
2013 Oct 24
1
build and install lustre from source code -- warning on libcfs.ko for no modversions
I built and installed Lustre from source. When I installed the built lustre-modules RPM, I got a warning that libcfs.ko from kernel-2.6.18-prep has no modversions (the message is posted below), so it cannot be reused for the kernel. Is there any procedure I need to follow during the build? There is an include/config/modversion.h under the patched kernel, but the file does not have any contents and its size is 0. Thanks for
2017 Dec 18
3
sieve_pipe_socket_dir not created at startup for configured pipe service
Hi, all. I'm new to the list but not to Dovecot. I've been using it in a basic configuration for some time, but finally decided to tweak my deployed system to take advantage of some more Dovecot features. In particular I'm trying to set up Pigeonhole to implement spam retraining with imapsieve. All of this is with Dovecot 2.2.31 (65cde28) and Pigeonhole 0.4.19. Before going any further let me
2010 Jul 30
2
lustre 1.8.3 upgrade observations
Hello, 1) When compiling the Lustre modules for the server, the ./configure script behaves a bit oddly. The --enable-server option is silently ignored when the kernel is not 100% patched. Unfortunately the build then still works for the server, but during the mount the error message complains about a missing "lustre" module, which is in fact loaded and running. What is really missing are the ldiskfs et al
2010 Feb 05
0
Announce: Lustre 1.8.2 is available!
Hi all, Lustre 1.8.2 is available on the Sun Download Center Site. http://www.sun.com/software/products/lustre/get.jsp The change log and release notes can be read here: http://wiki.lustre.org/index.php/Use:Change_Log_1.8 Here are some items that may interest you in this release: * 16TB LUN is supported on RHEL5 with ext4-based ldiskfs and not with ext3-based ldiskfs. The default RHEL5
2014 May 26
0
Remove filesystem directories from MDT
Hello, I have some problems in my filesystem. When I browse the filesystem from a client, a specific directory has subdirectories that contain the same directories, in other words: LustreFS -> dir A -> dir B -> dir B -> dir B -> dir B -> dir B… This directory, and its children, have the same obdidx/objid: [root@client vm-106-disk-1.raw]# lfs getstripe vm-106-disk-1.raw
2013 Sep 19
0
Files written to an OST are corrupted
Hi, everyone. I need some help in figuring out what may have happened here, as newly created files on an OST are being corrupted. I don't know if this applies to all files written to this OST, or just to files of order 2 GB in size, but files are definitely being corrupted, with no errors reported by the OSS machine. Let me describe the situation. We had been running Lustre 1.8.4 for
2007 Nov 07
1
ll_cfg_requeue process timeouts
Hi, Our environment is: 2.6.9-55.0.9.EL_lustre.1.6.3smp. I am getting the following errors from two OSSes ... Nov 7 10:39:51 storage09.beowulf.cluster kernel: LustreError: 23045:0:(client.c:519:ptlrpc_import_delay_req()) @@@ IMP_INVALID req at 00000100b410be00 x4190687/t0 o101->MGS at MGC10.143.245.201@tcp_0:26 lens 232/240 ref 1 fl Rpc:/0/0 rc 0/0 Nov 7 10:39:51