Displaying 20 results from an estimated 700 matches similar to: "How to evict a dead client?"
2008 Feb 22 (0 replies): lustre error
Dear All,
Yesterday evening our cluster stopped.
Two of our nodes tried to take the resource from each other; as far as I
could tell, neither could see the other side.
I stopped heartbeat and the resources, started everything again, and it came
back online and worked fine.
This morning I saw this in the logs:
Feb 22 03:25:07 node4 kernel: Lustre:
7:0:(linux-debug.c:98:libcfs_run_upcall()) Invoked LNET upcall
2011 Feb 11 (2 replies): Not able to capture detailed CPU information of the guest machine using Libvirt API.
Hi,
I have two KVM guests on an Ubuntu host machine. I am using the Python
binding of the libvirt API to query the hypervisor and capture CPU- and
memory-related information about the guest machines.
I need to capture detailed CPU information such as cpu_aidle, cpu_idle,
cpu_speed, cpu_wio, and memory information such as mem_cached, mem_buffers,
mem_free, etc. for the guest machines.
How could I get these
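As a rough illustration of what libvirt itself can report (a sketch of my own, not from the original thread; "guest1" is a hypothetical domain name), the shell equivalents of the basic calls are:
[code]
# vCPU count, cumulative CPU time, and current/maximum memory of a domain
virsh dominfo guest1
# Balloon/memory statistics for the domain, where the hypervisor supports them
virsh dommemstat guest1
[/code]
Note that names like cpu_aidle, cpu_wio, mem_cached and mem_buffers look like Ganglia host metrics; libvirt does not expose guest-internal values of that kind directly.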
2008 Jan 15 (19 replies): How do you make an MGS/OSS listen on 2 NICs?
I am running the CentOS 5 distribution without adding any updates from CentOS. I am using the Lustre 1.6.4.1 kernel and software.
I have two NICs that run through different switches.
I have the lustre options in my modprobe.conf to look like this:
options lnet networks=tcp0(eth1,eth0)
My MGS seems to be only listening on the first interface however.
When I try and ping the 1st interface (eth1)
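One common way to make the server answer on both interfaces is to define a separate LNET network per NIC in modprobe.conf; a minimal sketch (the interface assignments are assumptions, not taken from the original post):
[code]
# One LNET network per interface: clients reached via eth0 use NIDs like
# <ip>@tcp0, clients reached via eth1 use <ip>@tcp1.
options lnet networks=tcp0(eth0),tcp1(eth1)
[/code]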
2008 Jan 10 (4 replies): 1.6.4.1 - active client evicted
Hi!
We've started to poke and prod at Lustre 1.6.4.1, and it seems to
mostly work (we haven't had it OOPS on us yet like the earlier
1.6 versions did).
However, we had this weird incident where an active client (it was
copying 4GB files and running ls at the time) got evicted by the MDS
and all OSTs. After a while the logs indicate that it did recover the
connection
2007 Dec 14 (1 reply): evicting clients when shutdown cleanly?
Should I be seeing messages like:
Dec 14 12:06:59 nyx170 kernel: Lustre: MGS: haven't heard from client
dadccfac-8610-06e7-9c02-90e552694947 (at 141.212.30.185@tcp) in 234
seconds. I think it's dead, and I am evicting it.
when the client was shut down cleanly and the Lustre file system is
mounted via /etc/fstab? The file system (I would hope) would be
unmounted
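For context, a typical client-side /etc/fstab entry for a Lustre mount looks roughly like the sketch below (the MGS NID and mount point are assumptions); with _netdev the filesystem is unmounted by the network-aware shutdown path before the interfaces go down:
[code]
# <MGS NID>:/<fsname>       <mount point>  <type>   <options>          <dump> <pass>
192.168.1.10@tcp0:/lustre   /mnt/lustre    lustre   defaults,_netdev   0      0
[/code]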
2010 Jan 16 (0 replies): [PATCH] drm/nouveau: Evict buffers in VRAM before freeing sgdma
Currently, we take down the sgdma engine without evicting all buffers
from VRAM.
The TTM device release will try to evict anything in VRAM to GART
memory, but this will fail since sgdma has already been taken down.
This causes an infinite loop in kernel mode on module unload.
It usually doesn't happen because there aren't any buffers on close.
However, if the GPU is locked up, this
2010 Mar 06 (0 replies): [PATCH] drm/nouveau: Never evict VRAM buffers to system.
VRAM->system is a synchronous operation: it involves scheduling a
VRAM->TT DMA transfer and stalling the CPU until it's finished so that
we can unbind the new memory from the translation tables. VRAM->TT can
always be performed asynchronously, even if TT is already full and we
have to move something out of it.
Additionally, allowing VRAM->system behaves badly under heavy memory
2006 Nov 23 (1 reply): (OT) HylaFAX, IAXModem, Asterisk
I have all three running on the same box. I say OT because it appears
Asterisk is doing its job just fine; it must be an IAXmodem or
faxgetty (HylaFAX) problem.
When faxes work, they look great. I have ten IAXmodems set up on
different ports and they register fine. I have ten faxgettys that
start up fine. I start the IAXmodems and then the faxgettys from inittab
(sketched below). They are set up as a roll
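A minimal sketch of what such inittab entries typically look like (IDs, device names and binary paths are assumptions based on the usual IAXmodem/HylaFAX setup, not taken from the original post):
[code]
# /etc/inittab: respawn one iaxmodem instance and one faxgetty per virtual modem
ia00:2345:respawn:/usr/sbin/iaxmodem ttyIAX0
mo00:2345:respawn:/usr/sbin/faxgetty ttyIAX0
ia01:2345:respawn:/usr/sbin/iaxmodem ttyIAX1
mo01:2345:respawn:/usr/sbin/faxgetty ttyIAX1
[/code]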
2010 Sep 03 (1 reply): Compiling lustre-client 2.0.0.1 on RHEL 4
Hi,
I tried to compile lustre-client 2.0.0.1 on RHEL 4 with kernel
2.6.9-89.0.28.EL-x86_64 and got 3 errors and 1 warning during the
compile.
The compile is executed with the -Werror option, and it fails in all 4 cases:
* Error: lustre_compat25.h
CC [M] /usr/src/redhat/BUILD/lustre-2.0.0.1/lustre/fid/fid_handler.o
In file included
from
2013 Mar 18 (1 reply): lustre showing inactive devices
I installed 1 MDS, 2 OSS/OSTs and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M  3.9G       6%    /mnt/lustre[MDT:0]
2008 Feb 12 (0 replies): Lustre-discuss Digest, Vol 25, Issue 17
Hi,
I just want to know whether there are any alternative file systems to HP SFS.
I heard that there is Cluster Gateway from PolyServe. Can anybody please help me find out more about this Cluster Gateway?
Thanks and Regards,
Ashok Bharat
-----Original Message-----
From: lustre-discuss-bounces at lists.lustre.org on behalf of lustre-discuss-request at lists.lustre.org
Sent: Tue 2/12/2008 3:18 AM
2013 Mar 18 (1 reply): OST0006 : inactive device
I installed 1 MDS, 2 OSS/OSTs and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M  3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID
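When an OST shows up as inactive on a client, a common first check (a sketch under my own assumptions, not from the original thread) is to list the configured devices and, if the corresponding OSC was deactivated, re-activate it by device number:
[code]
# List all configured Lustre devices; inactive ones typically show "IN"
# rather than "UP" in the status column.
lctl dl
# Re-activate a deactivated OSC using its device number from the lctl dl
# output (the "7" here is only an example).
lctl --device 7 activate
[/code]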
2008 Mar 14 (0 replies): Help needed in Building lustre using pre-packaged releases
Hi,
Can anyone guide me in building Lustre using a pre-packaged Lustre release? I'm using Ubuntu 7.10 and want to build Lustre using the RHEL 2.6 RPMs available on my system. I'm following the how_to in the wiki, but it gives no detailed step-by-step procedure for building Lustre from a pre-packaged release.
I'm in need of this.
Thanks and Regards,
Ashok Bharat
-----Original
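For what it's worth, a rough sketch of building the client from the source tarball against the running kernel instead (version number and paths are assumptions, not steps from the wiki how_to), since the pre-packaged RPMs target RHEL kernels rather than Ubuntu ones:
[code]
# Hypothetical patchless-client build against the running Ubuntu kernel
tar xzf lustre-1.6.4.3.tar.gz
cd lustre-1.6.4.3
./configure --with-linux=/usr/src/linux-headers-$(uname -r)
make
sudo make install
[/code]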
2010 Jul 08 (1 reply): Cortado patch to optionally zero basetime
This patch adds an applet option to Cortado. The option is off by
default, meaning that the default behavior of Cortado should not change at
all. If the option is activated, Cortado will display all times relative
to the ogg file's basetime (i.e. first granule).
This patch was written in response to a request from Señor Ellery, who
noted that files ripped from the middle of a stream do not
2013 Dec 17 (2 replies): Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all,
Here is the situation:
I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a
failover MGS and active/active MDT with ZFS.
I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the
shelf has 2 SAS ports, connected to a SAS HBA on each node), and I
am using Lustre 2.4 on CentOS 6.4 x64.
I have created 3 zfs pools:
1. mgs:
# zpool
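The snippet breaks off at the zpool commands; to illustrate the general shape of such a setup, here is a rough sketch (pool names, vdevs and NIDs are assumptions, not the poster's actual commands) of creating a ZFS-backed MGS and MDT with mkfs.lustre and mounting them:
[code]
# Hypothetical ZFS-backed MGS and MDT; mkfs.lustre creates the pool/dataset
# itself when given the vdevs.
mkfs.lustre --mgs --backfstype=zfs mgspool/mgs mirror /dev/sdb /dev/sdc
mkfs.lustre --fsname=lustre --mdt --index=0 --backfstype=zfs \
    --mgsnode=10.0.0.22@tcp --mgsnode=10.0.0.23@tcp \
    --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp \
    mdtpool/mdt0 mirror /dev/sdd /dev/sde
mount -t lustre mgspool/mgs /mnt/mgs
mount -t lustre mdtpool/mdt0 /mnt/mdt0
[/code]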
2008 Mar 25 (2 replies): patchless kernel
Dear All,
make[5]: Entering directory `/usr/src/kernels/2.6.23.15-80.fc7-x86_64'
/usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:142: warning:
'request_queue_t' is deprecated
/usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:273: warning:
'request_queue_t' is deprecated
/usr/src/redhat/BUILD/lustre-1.6.4.3/lustre/llite/lloop.c:312:
2010 Aug 14 (0 replies): Lost OSTs, remounted, now /proc/fs/lustre/obdfilter/$UUID/ is empty
Hello,
We had a problem with our disk controller that required a reboot. 2 of
our OSTs remounted and went through the recovery window but clients
hang trying to access them. Also /proc/fs/lustre/obdfilter/$UUID/ is
empty for that OST UUID.
LDISKFS FS on dm-5, internal journal on dm-5:8
LDISKFS-fs: delayed allocation enabled
LDISKFS-fs: file extents enabled
LDISKFS-fs: mballoc enabled
2009 Feb 13 (1 reply): error with make
Hi,
I am trying to compile the R-devel version on a Unix SuSE machine
and got errors.
Would someone be able to help me determine what to do to fix
these errors:
make[1]: Entering directory `/lustre/people/schaffer/R-devel/m4'
make[1]: Nothing to be done for `R'.
make[1]: Leaving directory `/lustre/people/schaffer/R-devel/m4'
make[1]: Entering directory
2008 Feb 05 (2 replies): obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple
ethernet config, with the MDT and OST on the same node. Can someone tell
me if the following (a ~150 second recovery occurring when the small 190 GB
OST is re-mounted) is expected behavior, or if I'm missing something?
I thought I would send this and continue with the eval while awaiting a
response.
I'm using
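For reference, the recovery progress mentioned in the subject can be watched directly from the proc file; a small sketch, following the obdfilter device name in the subject line:
[code]
# After re-mounting the OST, the recovery_status file reports the status
# (RECOVERING/COMPLETE), time remaining, and how many clients have reconnected.
cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
[/code]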
2010 Sep 04 (0 replies): Set quota on Lustre system file client, reboots MDS/MGS node
Hi,
I used lustre-1.8.3 on CentOS 5.4. I patched the kernel according to the
Lustre 1.8 Operations Manual (PDF).
I have a problem when I want to implement quotas.
My cluster configuration is:
1. one MGS/MDS host (with two devices: sda and sdb, respectively)
with the following commands:
1) mkfs.lustre --mgs /dev/sda
2) mount -t lustre /dev/sda /mnt/mgt
3) mkfs.lustre --fsname=lustre
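The snippet breaks off in the middle of the setup commands; for completeness, a rough sketch (user name, limits and mount point are assumptions, not the poster's commands) of how quota enforcement was typically enabled on a Lustre 1.8 client once the servers were up:
[code]
# One-time scan to build the quota files (Lustre 1.8-era command), run on a
# client with the filesystem mounted.
lfs quotacheck -ug /mnt/lustre
# Set per-user block (KB) soft/hard limits and inode soft/hard limits.
lfs setquota -u someuser -b 1000000 -B 1200000 -i 10000 -I 11000 /mnt/lustre
# Verify the limits and current usage.
lfs quota -u someuser /mnt/lustre
[/code]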