search for: obdclass

Displaying 5 results from an estimated 5 matches for "obdclass".

2008 Mar 06 (0 replies)
oss umount hangs forever
...0ec64270
[46237.843057] Call Trace:
[46237.845815] [<ffffffff802e9f76>] log_wait_commit+0xa3/0xf5
[46237.851681] [<ffffffff802e48a0>] journal_stop+0x214/0x244
[46237.857341] [<ffffffff8851c259>] :obdfilter:filter_iocontrol+0x139/0xa20
[46237.864413] [<ffffffff8825fe64>] :obdclass:class_cleanup+0x514/0xfc0
[46237.871219] [<ffffffff88263aec>] :obdclass:class_process_config+0x137c/0x1a90
[46237.878749] [<ffffffff88264519>] :obdclass:class_manual_cleanup+0x319/0xf70
[46237.886137] [<ffffffff882707a1>] :obdclass:server_put_super+0xb61/0xf70
[46237.893111] [...
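A stack like this, stuck in journal_stop during class_cleanup, means the umount thread is waiting on a journal commit that never completes. A minimal way to see what every other thread is doing at that moment, assuming magic-sysrq is enabled (commands are illustrative, not from the thread itself):

# echo t > /proc/sysrq-trigger   # dump all task stacks to the kernel log
# dmesg | less                   # look for the blocked umount and journal threads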
2008 Apr 15 (5 replies)
o2ib module prevents shutdown
...use count of the module is one, but I don't see where it's used.
# umount /mnt/lustre
# ifconfig ib0 down
# modprobe -r ko2iblnd
FATAL: Module ko2iblnd is in use.
# lsmod | grep ko2
ko2iblnd 143136 1
lnet 258088 5 lustre,ksocklnd,ko2iblnd,ptlrpc,obdclass
libcfs 189784 12 osc,mgc,lustre,lov,lquota,mdc,ksocklnd,ko2iblnd,ptlrpc,obdclass,lnet,lvfs
rdma_cm 65940 4 ko2iblnd,ib_iser,rdma_ucm,ib_sdp
ib_core 88576 16 ko2iblnd,ib_iser,rdma_ucm,ib_ucm,ib_srp,ib_sdp,rdma_cm,ib_cm,iw_cm,ib_local_sa,ib_ipoi...
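The unnamed reference on ko2iblnd is normally held by LNET itself, which takes a module reference on its LND while the network is configured, so lsmod shows a use count of 1 with no user listed. A sketch of the usual teardown order, assuming all Lustre mounts are already gone (lctl network down and lustre_rmmod are standard Lustre utilities, but check the version in use):

# lctl network down      # unconfigure LNET so it releases the LND reference
# modprobe -r ko2iblnd   # should now succeed
# lustre_rmmod           # alternatively, unload the whole Lustre/LNET module stack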
2010 Jul 07 (0 replies)
How to evict a dead client?
...ed 188807 times
Jul 7 14:45:11 com01 kernel: BUG: soft lockup - CPU#15 stuck for 10s! [ll_ost_118:12180]
Jul 7 14:45:11 com01 kernel: CPU 15:
Jul 7 14:45:11 com01 kernel: Modules linked in: obdfilter(U) fsfilt_ldiskfs(U) ost(U) mgc(U) lustre(U) lov(U) mdc(U) lquota(U) osc(U) ksocklnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) ldiskfs(U) crc16(U) autofs4(U) hidp(U) rfcomm(U) l2cap(U) bluetooth(U) sunrpc(U) dm_multipath(U) scsi_dh(U) video(U) hwmon(U) backlight(U) sbs(U) i2c_ec(U) i2c_core(U) button(U) battery(U) asus_acpi(U) acpi_memhotplug(U) ac(U) ipv6(U) xfrm_nalgo(U) crypto_api(U) parport...
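For the question in the subject: Lustre servers can force an eviction by NID. A minimal sketch of the 1.8-era approach documented in the operations manual, using a hypothetical client NID of 192.168.1.5@tcp:

# lctl set_param obdfilter.*.evict_client=nid:192.168.1.5@tcp   # on each OSS
# lctl set_param mds.*.evict_client=nid:192.168.1.5@tcp         # on the MDS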
2008 Feb 04 (32 replies)
Lustre clients getting evicted
on our cluster that has been running Lustre for about 1 month. I have 1 MDT/MGS and 1 OSS with 2 OSTs. Our cluster is all GigE and has about 608 nodes (1854 cores). We have a lot of jobs that die and/or go into high IO wait; strace shows processes stuck in fstat(). The big problem (I think), and I would like some feedback on it, is that of these 608 nodes, 209 of them have in dmesg
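With symptoms spread across hundreds of clients, a first step is counting which nodes actually logged an eviction and matching the timestamps against the server logs. A hypothetical survey, assuming passwordless ssh and a node list in nodes.txt (both names are illustrative):

for n in $(cat nodes.txt); do
    # count kernel eviction messages on each client
    echo "$n: $(ssh "$n" dmesg | grep -ci evict)"
done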
2008 Feb 12 (0 replies)
Lustre-discuss Digest, Vol 25, Issue 17
...ca293>{:ptlrpc:ldlm_resource_putref+435}
>>> <ffffffffa02dc2c9>{:ptlrpc:ldlm_prep_enqueue_req+313}
>>> <ffffffffa0394e6f>{:mdc:mdc_enqueue+1023}
>>> <ffffffffa02c1035>{:ptlrpc:lock_res_and_lock+53}
>>> <ffffffffa0268730>{:obdclass:class_handle2object+224}
>>> <ffffffffa02c5fea>{:ptlrpc:__ldlm_handle2lock+794}
>>> <ffffffffa02c106f>{:ptlrpc:unlock_res_and_lock+31}
>>> <ffffffffa02c5c03>{:ptlrpc:ldlm_lock_decref_internal+595}
>>> <ffffffffa02c156c>...
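When a trace lands in the ldlm enqueue path like this, the Lustre debug log usually names the lock resource involved. A minimal sketch, assuming a 1.x-era lctl (dlmtrace is a standard Lustre debug flag):

# lctl set_param debug=+dlmtrace     # add lock tracing to the debug mask
# lctl dk /tmp/lustre-debug.txt      # dump and clear the kernel debug buffer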