Displaying 20 results from an estimated 36 matches for "ldiskf".
2010 Sep 30
1
ldiskfs-ext4 interoperability question
Our current Lustre servers run the version 1.8.1.1 with the regular ldiskfs.
We are looking to expand our Lustre file system with new servers/storage and upgrade all the Lustre servers to 1.8.4 at the same time. We
would like to make use of ldiskfs-ext4 on the new servers to support larger OSTs.
I just want to confirm the following facts:
1. Is it possible...
2007 Oct 01
1
fsck ldiskfs-backed OSTs?
There are references to running fsck on the lustre OSTs after a crash
or power failure. However, after downloading the ClusterFS
e2fsprogs and building them, e2fsck does not recognize our ldiskfs-
based OSTs. Is there a way to fsck the ldiskfs-based OSTs?
Thanks,
Charlie Taylor
UF HPC Center
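For later readers: with the CFS-patched e2fsprogs installed, an ldiskfs-backed OST can be checked with e2fsck directly against the underlying block device; a read-only trial pass is the safe first step. The device path below is hypothetical, a sketch only:

```shell
# Hypothetical OST block device -- substitute your own.
OST_DEV=/dev/sdb

# Read-only trial pass: -f forces a full check even if the filesystem
# looks clean, -n answers "no" to every repair prompt, so nothing on
# disk is modified. Run with the OST unmounted.
e2fsck -fn $OST_DEV

# Only after reviewing that output, run a repairing pass:
# e2fsck -fp $OST_DEV
```

If the stock distribution e2fsck is picked up instead of the patched one, it will not recognize the ldiskfs-specific features, which matches the symptom described above.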
2008 Feb 12
1
LDISKFS-fs warnings on MDS lustre 1.6.4.2
Hi Folks,
We can see these messages on our MDS:
Feb 12 12:46:08 mds01 kernel: LDISKFS-fs warning (device dm-0):
empty_dir: bad directory (dir #31452569) - no `.' or `..'
Feb 12 12:46:08 mds01 kernel: LDISKFS-fs warning (device dm-0):
ldiskfs_rmdir: empty directory has too many links (3)
It seems to indicate that we have a bad (corrupted) directory. Do you have...
2007 Nov 12
8
More failover issues
In 1.6.0, when creating a MDT, you could specify multiple --mgsnode options
and it would failover between them. 1.6.3 only seems to take the last one
and --mgsnode=192.168.1.252@o2ib:192.168.1.253@o2ib doesn't seem to failover
to the other node. Any ideas how to get around this?
Robert
Robert LeBlanc
College of Life Sciences Computer Support
Brigham Young University
leblanc at
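For reference, the formatting tools also accept the failover NIDs as repeated --mgsnode options rather than a single colon-separated value, which may behave differently in 1.6.3. A sketch, with a hypothetical OST device and the NIDs from the post:

```shell
# Hypothetical OST device; the two MGS NIDs are the ones quoted above.
# Repeating --mgsnode lists the primary and failover MGS nodes, as an
# alternative to the colon-separated form that 1.6.3 mishandles.
mkfs.lustre --ost --fsname=lustre \
    --mgsnode=192.168.1.252@o2ib \
    --mgsnode=192.168.1.253@o2ib \
    /dev/sdb
```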
2010 Aug 14
0
Lost OSTs, remounted, now /proc/fs/lustre/obdfilter/$UUID/ is empty
Hello,
We had a problem with our disk controller that required a reboot. 2 of
our OSTs remounted and went through the recovery window but clients
hang trying to access them. Also /proc/fs/lustre/obdfilter/$UUID/ is
empty for that OST UUID.
LDISKFS FS on dm-5, internal journal on dm-5:8
LDISKFS-fs: delayed allocation enabled
LDISKFS-fs: file extents enabled
LDISKFS-fs: mballoc enabled
LDISKFS-fs: mounted filesystem dm-5 with ordered data mode
Lustre: 16377:0:(filter.c:990:filter_init_server_data()) RECOVERY:
service scratch-OST0007, 281 reco...
2013 Oct 24
1
build and install lustre from source code -- warning on libcfs.ko for no modversions
...size is 0.
Thanks for your help.
Here were messages when I installed RPM:
# rpm -ivh kernel-2.6.18prep-1.i386.rpm
Preparing... ########################################### [100%]
package kernel-2.6.18prep-1.i386 is already installed
[root@localhost myrpms]# rpm -ivh lustre-ldiskfs-3.1.4-2.6.18_prep_201310231743.i386.rpm
Preparing... ########################################### [100%]
1:lustre-ldiskfs ########################################### [100%]
[root@localhost myrpms]# rpm -ivh lustre-modules-1.8.5-2.6.18_prep_201310231743.i386.rpm
Preparing.....
2012 Sep 27
4
Bad reporting inodes free
Hello,
When I run "df -i" on my clients I get 95% inodes used, or 5% inodes free:
Filesystem                            Inodes    IUsed   IFree IUse% Mounted on
lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248   95% /mnt/data
But if I run "lfs df -i" I get:
UUID                     Inodes    IUsed    IFree I
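The 95% in the quoted df output is consistent with the raw counts: df rounds the usage percentage up, so 94.37% is shown as 95%. A quick check with the numbers from the post:

```shell
# Inode counts quoted from the "df -i" output above.
inodes=22200087
iused=20949839

# df rounds IUse% up (ceiling): 20949839/22200087 is about 94.37%,
# which df reports as 95%.
echo $(( (iused * 100 + inodes - 1) / inodes ))%
```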
2010 Jul 30
2
lustre 1.8.3 upgrade observations
...pt behaves a bit odd.
The --enable-server option is silently ignored when the kernel is not 100% patched.
Unfortunately the build works for the server, but during the mount the error message complains about a missing "lustre" module, even though that module is loaded and running.
What is really missing are the ldiskfs et al module(s).
My feature request is that "./configure --enable-server" should fail when it is not able to build the server modules, instead of printing the following message:
<snip>
checking if kernel defines unshare_fs_struct()... no
checking for /usr/src/linux-2.6.27.39-0.3/...
2010 Aug 11
3
Failure when mounting Lustre
Hi,
I get the following error when I try to mount lustre on the clients.
Permanent disk data:
Target: lustre-OSTffff
Index: unassigned
Lustre FS: lustre
Mount type: ldiskfs
Flags: 0x72
(OST needs_index first_time update )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=164.107.119.231@tcp
sh: losetup: command not found
mkfs.lustre: error 32512 on losetup: Unknown error 32512
mkfs.lustre FATAL: Loop device setup for /dev/sdb failed: U...
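The root cause is visible in the line "sh: losetup: command not found": mkfs.lustre shells out to losetup, and error 32512 is simply a shell exit status of 127 ("command not found") shifted left by 8 bits. A quick check before re-running, assuming losetup should come from util-linux:

```shell
# 32512 == 127 << 8: the shell that mkfs.lustre spawned could not find
# losetup (exit status 127, "command not found").
# Confirm losetup is installed and on root's PATH before retrying:
command -v losetup || echo "losetup missing: install util-linux"
```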
2012 Oct 09
1
MDS read-only
Dear all,
Two of our MDSes have repeatedly gone read-only recently, after an e2fsck on Lustre 1.8.5. After the MDT has been mounted for a while, the kernel reports errors like:
Oct 8 20:16:44 mainmds kernel: LDISKFS-fs error (device cciss!c0d1): ldiskfs_ext_check_inode: bad header/extent in inode #50736178: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
Oct 8 20:16:44 mainmds kernel: Aborting journal on device cciss!c0d1-8.
And make the MDS read-only.
This problem has made about 1...
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
...3-e2fsprog/
Here is aaa.0 which shows the recovery:
+ dmesg -c
+ mkfs.lustre --fsname datafs --mdt --mgs --reformat /dev/sda1
WARNING: MDS group upcall is not set, use 'NONE'
Permanent disk data:
Target: datafs-MDTffff
Index: unassigned
Lustre FS: datafs
Mount type: ldiskfs
Flags: 0x75
(MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:
device size = 95367MB
formatting backing filesystem ldiskfs on /dev/sda1
target name datafs-MDTffff
4k blocks 0
options...
2007 Nov 07
9
How To change server recovery timeout
Hi,
Our lustre environment is:
2.6.9-55.0.9.EL_lustre.1.6.3smp
I would like to change the recovery timeout from the default value of 250s to
something longer.
I tried example from manual:
set_timeout <secs> Sets the timeout (obd_timeout) for a server
to wait before failing recovery.
We performed that experiment on our test lustre installation with one
OST.
storage02 is our OSS
[root at
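For the archives: in Lustre 1.6 the server timeout is set permanently on the MGS with lctl conf_param, keyed by filesystem name. A sketch, assuming a hypothetical filesystem named testfs:

```shell
# Run on the MGS node. "testfs" is a hypothetical filesystem name --
# substitute your own. This raises obd_timeout for all servers of that
# filesystem from the default to 600 seconds.
lctl conf_param testfs.sys.timeout=600
```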
2010 Aug 11
3
Version mismatch of Lustre client and server
Hello,
I am planning on deploying a few more clients in my lustre environment and
was wondering which client version to install. I know it is okay to run a
newer client version than your lustre server for upgrade purposes. However,
would it be okay to be in this state for a longer period of time (for the
life of this filesystem)? My lustre server is currently running 1.8.1.1 on
RHEL 5.3 and I
2010 Feb 05
0
Announce: Lustre 1.8.2 is available!
...n the Sun Download Center Site.
http://www.sun.com/software/products/lustre/get.jsp
The change log and release notes can be read here:
http://wiki.lustre.org/index.php/Use:Change_Log_1.8
Here are some items that may interest you in this release:
* 16TB LUN is supported on RHEL5 with ext4-based ldiskfs and
not with ext3-based ldiskfs. The default RHEL5 rpms still
use ext3 and specific rpms are provided with ext4 support.
Naming for the files uses "-ext4" following the kernel
version string. Please note that this was manually done
for this release, so internal package name wil...
2007 Oct 08
5
patchless client on RHEL4
Is there instructions on how to use the patchless client on RHEL4 ?
For version 1.6.2
We would prefer a rpm, but we are not scared of doing a build if
needed.
Brock Palen
Center for Advanced Computing
brockp@umich.edu
(734)936-1985
2014 May 26
0
Remove filesystem directories from MDT
...s getstripe vm-106-disk-1.raw
vm-106-disk-1.raw/vm-106-disk-1.raw
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 19
obdidx objid objid group
19 7329413 0x6fd685 0
But when I mount the MDT with ldiskfs I can see the filesystem correctly in the "ROOT" MDT directory.
I would like to remove this looped directory (dir B in the example) from the client, but when I try to remove it I get a "kernel panic" in the MDS/MGS.
Is it a good idea to remove a subdirectory in the MDT (with ldiskfs mounted) in the "R...
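For readers attempting the same inspection: the MDT can be mounted as plain ldiskfs read-only first, so that looking around under ROOT cannot make the corruption worse. The device and mount point below are hypothetical:

```shell
# Hypothetical MDT device and mount point -- substitute your own.
# Mount read-only for inspection; take a device-level backup before
# attempting any writable ldiskfs surgery on the MDT.
MDT_DEV=/dev/sdd
mkdir -p /mnt/mdt-ldiskfs
mount -t ldiskfs -o ro $MDT_DEV /mnt/mdt-ldiskfs
ls /mnt/mdt-ldiskfs/ROOT
```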
2008 Mar 06
2
strange lustre errors
...tcp changed handle from
0x9ee58a75fddf2754 to 0x9ee58a761d190462; copying, but this may
foreshadow disaster
Any help in interpreting these error messages is much
appreciated. The two lustre oss/mds/mgs servers have been running fine
with an uptime for over a month now after the ldiskfs patch is applied
as mentioned here -
https://bugzilla.lustre.org/show_bug.cgi?id=13438
The version of Lustre is 1.6.3.
Thanks
Balagopal
2007 Oct 25
1
Error message
...->MGS@MGC192.168.0.200@tcp_0:26 lens 176/184
ref 1 fl Rpc:/0/0 rc 0/0
Oct 25 14:20:14 oss2 kernel: LustreError:
3228:0:(client.c:519:ptlrpc_import_delay_req()) Skipped 39 previous
similar messages
Platform Details:
RHEL 4.5; i686
kernel-lustre-smp-2.6.9-55.0.9.EL_lustre.1.6.3
lustre-ldiskfs-3.0.2-2.6.9_55.0.9.EL_lustre.1.6.3smp
lustre-1.6.3-2.6.9_55.0.9.EL_lustre.1.6.3smp
lustre-modules-1.6.3-2.6.9_55.0.9.EL_lustre.1.6.3smp
Thanks in advance,
_________________________________________
Ron Jerome
Programmer/Analyst
National Research Council Canada
M-2, 1200 Montreal Road, Ott...
2013 Sep 19
0
Files written to an OST are corrupted
...is is a 9-disk, RAID-5, 5.5TB volume, on a
Dell MD1000 shelf using a PERC-6 controller. A second 5-disk RAID-5
shares the shelf, with the 15th disk as a hot spare, and that second
volume is not having issues.
289 mkdir reformat
290 cd reformat
292 mkdir -p /mnt/ost
293 mount -t ldiskfs /dev/sdc /mnt/ost
294 mkdir sdc
295 pushd /mnt/ost
296 cp -p last_rcvd /root/reformat/sdc
297 cd O
298 cd 0
299 cp -p LAST_ID /root/reformat/sdc
300 cd ../..
301 cp -p CONFIGS/* /root/reformat/sdc
304 umount /mnt/ost
At this point, the web interface of Dell'...