Displaying 12 results from an estimated 12 matches for "obdfilter".
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
...afs /mnt/datafs
dmesg -c >dmesg.1
umount /mnt/datafs
umount /mnt/data/ost0
umount /mnt/data/mdt
e2label /dev/sda1
e2label /dev/sda2
dmesg -c >dmesg.2
mount.lustre /dev/sda1 /mnt/data/mdt
mount.lustre /dev/sda2 /mnt/data/ost0
dmesg -c >dmesg.3
while cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status \
  | egrep 'RECOVERING|time remaining'; do sleep 30; done
mount.lustre pool4@tcp:/datafs /mnt/datafs
}
aaa 2>&1 | tee aaa.0; dmesg -c >dmesg.4
The files dmesg.{0,1,2,3,4} and aaa.0 are available at:
http://fnapcf.fnal.gov/~ron/lu...
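As an aside, later Lustre releases expose the same information through lctl rather than /proc reads; a minimal sketch of the equivalent poll, assuming the OST name datafs-OST0000 from the script above and the usual "status:" line in recovery_status output:
# Poll until the OST reports recovery complete (lctl get_param -n is
# the portable replacement for reading /proc/fs/lustre directly)
until lctl get_param -n obdfilter.datafs-OST0000.recovery_status \
      | grep -q 'status: COMPLETE'; do
    sleep 30
done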
2010 Aug 14
0
Lost OSTs, remounted, now /proc/fs/lustre/obdfilter/$UUID/ is empty
Hello,
We had a problem with our disk controller that required a reboot. Two of
our OSTs remounted and went through the recovery window but clients
hang trying to access them. Also /proc/fs/lustre/obdfilter/$UUID/ is
empty for that OST UUID.
LDISKFS FS on dm-5, internal journal on dm-5:8
LDISKFS-fs: delayed allocation enabled
LDISKFS-fs: file extents enabled
LDISKFS-fs: mballoc enabled
LDISKFS-fs: mounted filesystem dm-5 with ordered data mode
Lustre: 16377:0:(filter.c:990:filter_init_server_data())...
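A hedged first diagnostic for a situation like this (not from the original thread): confirm the obdfilter device for that OST actually registered on the OSS, since an empty proc directory usually means device setup never completed:
# On the affected OSS: the OST should appear as an obdfilter device
# in the UP state; if it is missing, setup failed before /proc was populated
lctl dl
# If the device is UP, check whether recovery really finished
lctl get_param obdfilter.*.recovery_status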
2010 Aug 06
1
Deprecated client still shown on OST exports
Some clients were removed several weeks ago but are still listed in:
ls -l /proc/fs/lustre/obdfilter/*/exports/
This was found after tracing back mystery tcp packets to the OSS.
Although this is causing no damage, it raises the question of when
former clients will be cleared from the OSS. Is there a way to manually
remove these exports from the OSS?
--
Regards,
David
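A hedged sketch of the manual route (not part of the original mail): an idle export can be dropped by evicting the old client's NID on the OSS. The NID below is hypothetical; substitute the stale client's address from the exports listing:
# On the OSS: evict the stale client by NID, removing its export
# from every obdfilter device matched by the wildcard
# (192.168.1.5@tcp is a placeholder NID)
lctl set_param obdfilter.*.evict_client=192.168.1.5@tcp
# Confirm the export directory disappeared
ls -l /proc/fs/lustre/obdfilter/*/exports/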
2010 Sep 13
2
1.8.4 and write-through cache
...mins later, they would lock
up again.
The OSSes were dumping stacks all over the place, crawling along and
generally making our Lustre filesystem unusable.
After trying different kernels, RAID card drivers, changing the
write-back policy on the RAID cards, etc., the solution was to
lctl set_param obdfilter.*.writethrough_cache_enable=0
lctl set_param obdfilter.*.read_cache_enable=0
on all the nodes with the 3ware cards.
Has anyone else seen this? I am completely baffled as to why it only
affects our nodes with 3ware cards.
These nodes were working very well under 1.8.3...
--
Dr Stuart Midg...
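As an aside (a sketch, not from the thread): the same parameters can be read back afterwards, which makes it easy to confirm the workaround is in effect on every affected OSS:
# On each OSS with a 3ware card: both values should read 0 (disabled)
lctl get_param obdfilter.*.writethrough_cache_enable
lctl get_param obdfilter.*.read_cache_enable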
2010 Jul 13
4
Enable async journals
...ed.
The question is whether the procedure:
umount <filesystem> on all clients
umount <osts> on all OSSes
e2fsck <ost-device> on all OSSes for all OSTs
tune2fs -O ^has_journal <ost-device> on all OSSes for all OSTs
lctl set_param obdfilter.*.sync_journal=0 on all OSSes
mount <osts> on all OSSes
mount <filesystem> on all clients
is correct to do the job? (I hope it isn't necessary to recreate a FS
from scratch.) Many thanks in advance.
Cheers
-Frank Heckes
P.S.: 1.8.1.1 still...
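For completeness, a hedged sketch (not from the mail) of verifying the result after remounting, before letting clients back on:
# On each OSS: sync_journal = 0 means journal commits are now asynchronous
lctl get_param obdfilter.*.sync_journal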
2007 Nov 07
9
How to change server recovery timeout
...obd_timeout) for a server
to wait before failing recovery.
We performed that experiment on our test lustre installation with one
OST.
storage02 is our OSS
[root@storage02 ~]# lctl dl
0 UP mgc MGC10.143.245.3@tcp 31259d9b-e655-cdc4-c760-45d3df426d86 5
1 UP ost OSS OSS_uuid 3
2 UP obdfilter home-md-OST0001 home-md-OST0001_UUID 7
[root@storage02 ~]# lctl --device 2 set_timeout 600
set_timeout has been deprecated. Use conf_param instead.
e.g. conf_param lustre-MDT0000 obd_timeout=50
usage: conf_param obd_timeout=<secs>
run <command> after connecting to device <devno>...
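The deprecation message already names the replacement. A hedged sketch following the syntax lctl itself suggests (the MDT name home-md-MDT0000 is inferred from the filesystem name here and may differ):
# On the MGS: set the timeout permanently via the configuration log,
# mirroring lctl's own example "conf_param lustre-MDT0000 obd_timeout=50"
lctl conf_param home-md-MDT0000 obd_timeout=600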
2008 Mar 06
0
OSS umount hangs forever
...ec640c0
[46237.835488] ffff81012b6a96b0 000028d210c60a79 0000000000001328 ffff81000ec64270
[46237.843057] Call Trace:
[46237.845815] [<ffffffff802e9f76>] log_wait_commit+0xa3/0xf5
[46237.851681] [<ffffffff802e48a0>] journal_stop+0x214/0x244
[46237.857341] [<ffffffff8851c259>] :obdfilter:filter_iocontrol+0x139/0xa20
[46237.864413] [<ffffffff8825fe64>] :obdclass:class_cleanup+0x514/0xfc0
[46237.871219] [<ffffffff88263aec>] :obdclass:class_process_config+0x137c/0x1a90
[46237.878749] [<ffffffff88264519>] :obdclass:class_manual_cleanup+0x319/0xf70
[46237.886137] [...
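When an umount wedges like this, a hedged next step (a standard kernel facility, not from the thread) is a full task dump to see exactly what the umount thread is sleeping on:
# Enable sysrq if needed, dump all task stacks to the kernel log, save it
echo 1 > /proc/sys/kernel/sysrq
echo t > /proc/sysrq-trigger
dmesg > umount-hang-tasks.txt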
2013 May 27
1
Query on improving throughput
Dear All,
We have a small Lustre setup with 7 OSTs on 8 Gb FC. We have kept one
OST per FC port. We run Lustre 2.3 on CentOS 6.3. There are 32 clients
which access this over FDR IB. We can achieve more than 1.3 GB/s
throughput using IOR without cache, which is roughly 185 MB/s per OST. We
wanted to know if this is normal. Should we expect more from an 8 Gb FC
port? OSTs are on 8+2 RAID6.
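For scale: an 8 Gb FC port delivers roughly 800 MB/s of payload after 8b/10b encoding, so 185 MB/s per OST leaves considerable headroom and the limit is likely elsewhere. A hedged sketch of an IOR run of this shape (the original flags were not posted; the mount point is hypothetical):
# 32 processes, file per process (-F), 1 MiB transfers, fsync on close (-e)
# to keep client cache out of the measurement; block size sized well above
# client RAM
mpirun -np 32 ior -w -r -e -F -t 1m -b 16g -o /mnt/lustre/ior.dat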
2014 Nov 13
0
OST acting up
...using Lustre 2.4.2 and have an OST that doesn't seem to be written to.
When I check the MDS with 'lctl dl' I do not see that OST in the list.
However when I check the OSS that OST belongs to I can see it is mounted
and up;
0 UP osd-zfs l2-OST0003-osd l2-OST0003-osd_UUID 5
3 UP obdfilter l2-OST0003 l2-OST0003_UUID 5
4 UP lwp l2-MDT0000-lwp-OST0003 l2-MDT0000-lwp-OST0003_UUID 5
Since it isn't written to (the MDS doesn't seem to know about it), I
created a directory. The index of that OST is 3 so I did a "lfs
setstripe -i 3 -c 1 /mnt/l2-lustre/test-37" to for...
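A hedged diagnostic sketch for this situation (not from the original mail): check the OST's visibility from a client before testing with setstripe:
# List the OSTs the filesystem knows about, with index and status
lfs osts /mnt/l2-lustre
# Ping every OST from this client and report any that fail (run as root)
lfs check osts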
2004 Jan 11
3
Lustre 1.0.2 packages available
Greetings--
Packages for Lustre 1.0.2 are now available in the usual place
http://www.clusterfs.com/download.html
This bug-fix release resolves a number of issues, of which a few are
user-visible:
- the default debug level is now a more reasonable production value
- zero-copy TCP is now enabled by default, if your hardware supports it
- you should encounter fewer allocation failures
2010 Jul 07
0
How to evict a dead client?
...al_launch_packet()) No usable routes to 12345-202.122.37.79@tcp
Jul 7 14:45:11 com01 last message repeated 188807 times
Jul 7 14:45:11 com01 kernel: BUG: soft lockup - CPU#15 stuck for 10s! [ll_ost_118:12180]
Jul 7 14:45:11 com01 kernel: CPU 15:
Jul 7 14:45:11 com01 kernel: Modules linked in: obdfilter(U) fsfilt_ldiskfs(U) ost(U) mgc(U) lustre(U) lov(U) mdc(U) lquota(U) osc(U) ksocklnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) ldiskfs(U) crc16(U) autofs4(U) hidp(U) rfcomm(U) l2cap(U) bluetooth(U) sunrpc(U) dm_multipath(U) scsi_dh(U) video(U) hwmon(U) backlight(U) sbs(U) i2c_ec(U) i2c_cor...
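A hedged sketch of the usual remedy (not in the excerpt): confirm the NID really is unreachable, then evict it so the OST threads stop retrying the dead route. The NID is taken from the log line above:
# From the OSS: verify the client no longer answers
lctl ping 202.122.37.79@tcp
# If dead, evict its NID from every OST on this OSS (same evict_client
# tunable as in the exports thread above)
lctl set_param obdfilter.*.evict_client=202.122.37.79@tcp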