Displaying 20 results from an estimated 3000 matches similar to: "Re: [zfs-discuss] Problems getting Lustre started with ZFS"
2013 Oct 22
0
Re: [zfs-discuss] ZFS/Lustre echo 0 >> max_cached_mb chewing 100% cpu
On 22 October 2013 16:21, Prakash Surya <surya1-i2BcT+NCU+M@public.gmane.org> wrote:
> This probably belongs on the Lustre mailing list.
I cross posted :)
> Regardless, I don't
> think you want to do that (do you?). It'll prevent any client side
> caching, and more importantly, I don't think it's a case that's been
>
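For reference, the client-side cache limit being discussed is normally read and set through lctl rather than by appending to the proc file; a hedged sketch (the exact parameter path can vary by Lustre version):

```shell
# Inspect the current client-side cache limit (one value per mount)
lctl get_param llite.*.max_cached_mb

# Set it explicitly; note that 'echo 0 >> ...' *appends* to the proc
# file, which is almost certainly not what was intended -- use '>' or lctl
lctl set_param llite.*.max_cached_mb=512
```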
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating Lustre. I'm trying what I think is a basic/simple
ethernet config, with MDT and OST on the same node. Can someone tell
me if the following (~150 second recovery occurring when a small 190 GB
OST is re-mounted) is expected behavior or if I'm missing something?
I thought I would send this and continue with the eval while awaiting
a response.
I'm using
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem and I wanted to separate the MGS/MDS from the OSS of the previous setup), and then did this:
For example:
mount -t ldiskfs /dev/old /mnt/ost_old
mount -t ldiskfs /dev/new /mnt/ost_new
rsync -aSv /mnt/ost_old/ /mnt/ost_new
# note trailing slash on ost_old/
If you are unable to connect both
2010 Sep 04
0
Set quota on Lustre system file client, reboots MDS/MGS node
Hi
I used Lustre 1.8.3 on CentOS 5.4. I patched the kernel according to the
Lustre 1.8 Operations Manual.
I have a problem when I want to implement quota.
My cluster configuration is:
1. one MGS/MDS host (with two devices: sda and sdb,respectively)
with the following commands:
1) mkfs.lustre --mgs /dev/sda
2) mount -t lustre /dev/sda /mnt/mgt
3) mkfs.lustre --fsname=lustre
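The entry is cut off at the formatting step, but for Lustre 1.8 the quota setup once the filesystem is mounted typically looks like the following sketch (mount point, user name, and limits are illustrative; `lfs quotacheck` applies to 1.8.x and was dropped in later versions):

```shell
# One-time scan to build the quota files (Lustre 1.8.x only)
lfs quotacheck -ug /mnt/lustre

# Turn quota enforcement on for users and groups
lfs quotaon -ug /mnt/lustre

# Set block limits for a user: 1 GB soft / 2 GB hard (values in KB),
# with no inode limits
lfs setquota -u someuser -b 1048576 -B 2097152 -i 0 -I 0 /mnt/lustre
```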
2008 Feb 12
0
Lustre-discuss Digest, Vol 25, Issue 17
Hi,
I just want to know whether there are any alternative file systems to HP SFS.
I heard that there is Cluster Gateway from PolyServe. Can anybody please help me find out more about this Cluster Gateway.
Thanks and Regards,
Ashok Bharat
-----Original Message-----
From: lustre-discuss-bounces at lists.lustre.org on behalf of lustre-discuss-request at lists.lustre.org
Sent: Tue 2/12/2008 3:18 AM
2008 Jan 10
4
1.6.4.1 - active client evicted
Hi!
We've started to poke and prod at Lustre 1.6.4.1, and it seems to
mostly work (we haven't had it OOPS on us yet like the earlier
1.6 versions did).
However, we had this weird incident where an active client (it was
copying 4GB files and running ls at the time) got evicted by the MDS
and all OSTs. After a while the logs indicate that it did recover the
connection
2007 Oct 15
3
iptables rules for lustre 1.6.x and MGS recovery procedures
Hi,
I would like to know what TCP/UDP ports I should keep open in my
firewall policies on my MGS server, so that I can have the MGS server
fire-walled. Also, in the event of loss of the MGT, would it be possible
to recreate the MGT without losing data or bringing the filesystem
down (i.e. by using cached information from the MDTs and OSTs)?
Thanks
Anand
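For what it's worth, LNET over TCP listens on port 988 by default, so a minimal ruleset on the MGS might look like this sketch (the interface-facing subnet is an assumption; adjust to the cluster network):

```shell
# Allow Lustre/LNET traffic (TCP port 988 by default) from the cluster subnet
iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 988 -j ACCEPT

# Everything else to this host falls through to the existing default policy
```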
2008 Mar 14
0
Help needed in Building lustre using pre-packaged releases
Hi,
Can anyone guide me in building Lustre using a pre-packaged Lustre release? I'm using Ubuntu 7.10. I want to build Lustre using the RHEL 2.6 RPMs available on my system. I'm referring to the how-to in the wiki, but it gives no detailed step-by-step procedure for building Lustre from a pre-packaged release.
I'm in need of this.
Thanks and Regards,
Ashok Bharat
-----Original
2007 Jan 17
0
Lustre 1.6.0 beta7 is now available
NOTE: BETA SOFTWARE, NOT FOR PRODUCTION USE
Cluster File Systems is pleased to announce the next beta version of
Lustre 1.6, which includes the following new features:
* Dynamic service threads - within a small range, extra service threads
are started automatically when the request queue builds up.
* Mixed-endian environment fixes
* Easy permanent OST removal
* MGS failover
* MGS proc
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
Chris,
Perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not.
Shane
----- Original Message -----
From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org>
To: lustre-discuss <lustre-discuss at lists.lustre.org>
Sent: Fri Mar 07 12:03:17 2008
Subject: Re: [Lustre-discuss] Multihomed
2007 Nov 16
5
Lustre Debug level
Hi,
The Lustre 1.6 v18 manual says that in production the Lustre debug level
should be set fairly low. The manual also says that I can verify that
level by running the following commands:
# sysctl portals.debug
This gives the following error:
error: 'portals.debug' is an unknown key
cat /proc/sys/lnet/debug
gives output:
ioctl neterror warning error emerg ha config console
cat
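The old portals.* sysctl names were renamed to lnet.* around Lustre 1.6, which would explain the "unknown key" error; a sketch of the current equivalents:

```shell
# Old (pre-1.6) name, no longer present:
#   sysctl portals.debug
# Current equivalents for reading the debug mask:
sysctl lnet.debug
cat /proc/sys/lnet/debug

# Lower the debug mask for production (e.g. only errors and warnings)
sysctl -w lnet.debug="error warning"
```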
2007 Nov 23
2
How to remove OST permanently?
All,
I've added a new 2.2 TB OST to my cluster easily enough, but this new
disk array is meant to replace several smaller OSTs that I used to have,
which were only 120 GB, 500 GB, and 700 GB.
Adding an OST is easy, but how do I REMOVE the small OSTs that I no
longer want to be part of my cluster? Is there a command to tell Lustre
to move all the file stripes off one of the nodes?
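There is no single "drain this OST" command in that era of Lustre; the usual approach, sketched below with illustrative device indexes and names, is to deactivate the OST so no new objects land on it, rewrite the affected files (which recreates their objects elsewhere), then mark it inactive permanently:

```shell
# On the MDS: stop new object allocations to the OST
# (find the device index with 'lctl dl')
lctl --device 9 deactivate

# On a client: list files with objects on that OST; copying each file
# to a temporary name and renaming it back moves its objects off the OST
lfs find /mnt/lustre --obd lustre-OST0002_UUID > /tmp/ost2_files

# Once empty, permanently mark the OST inactive in the config (on the MGS)
lctl conf_param lustre-OST0002.osc.active=0
```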
2008 Apr 15
4
NFS Performance
Hi,
With help from Oleg we got the right patches applied and NFS working
well. Maximum performance was about 60 MB/sec. Last week that dropped
to about 12.5 MB/sec and I cannot find a reason. Lustre clients all
obtain 100+ MB/sec on GigE. Each OST is good for 270 MB/sec. When
mounting the client on one of the OSSs I get 230 MB/sec. Seems the
speed is there. How can NFS and Lustre be tuned
2007 Oct 22
0
The mds_connect operation failed with -11
Hi, list:
I'm trying to configure Lustre with:
1 MGS -------------> 192.168.3.100 with mkfs.lustre --mgs /dev/md1 ;
mount -t lustre ...
1 MDT ------------> 192.168.3.101 with mkfs.lustre --fsname=datafs00
--mdt --mgsnode=192.168.3.100 /dev/sda3 ; mount -t lustre ...
4 ost -----------> 192.168.3.102-104 with mkfs.lustre --fsname=datafs00
--ost --mgsnode=192.168.3.100@tcp0
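A return code of -11 is -EAGAIN, which during mount usually means the MDT is still in recovery and is refusing new connections until it finishes; a sketch of how to check, using the filesystem name from the post (proc paths are the 1.x layout):

```shell
# On the MDT node: is recovery still in progress?
cat /proc/fs/lustre/mds/datafs00-MDT0000/recovery_status

# On any server: list configured devices and their state
lctl dl
```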
2013 Oct 17
3
Speeding up configuration log regeneration?
Hi,
We run four-node Lustre 2.3, and I needed to both change hardware
under MGS/MDS and reassign an OSS ip. Just the same, I added a brand
new 10GE network to the system, which was the reason for MDS hardware
change.
I ran tunefs.lustre --writeconf as per chapter 14.4 in the Lustre Manual,
and everything mounts fine. Log regeneration apparently works, since
it seems to do something, but
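For reference, the writeconf procedure has to cover every target, and the MGT must be remounted first so the other targets can re-register their (regenerated) configuration logs; a sketch with illustrative device names:

```shell
# With the filesystem fully stopped on all nodes:
tunefs.lustre --writeconf /dev/mgt_dev      # on the MGS
tunefs.lustre --writeconf /dev/mdt_dev      # on the MDS
tunefs.lustre --writeconf /dev/ost_dev      # on every OSS, once per OST

# Remount in order: MGT first, then MDT, then OSTs, then clients
mount -t lustre /dev/mgt_dev /mnt/mgt
mount -t lustre /dev/mdt_dev /mnt/mdt
mount -t lustre /dev/ost_dev /mnt/ost0
```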
2010 Sep 09
1
What's the correct sequence to umount multiple lustre file system
Any recommendation about the sequence in which to umount multiple Lustre file systems, with either a combined MGS/MDT or a separate MGS and MDT? Thanks.
Ming
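The usual teardown order is the reverse of startup; a sketch, with illustrative mount points:

```shell
# 1. Unmount all clients first, for each filesystem
umount /mnt/lustre            # on every client

# 2. Then OSTs, then the MDT, and the MGT last
umount /mnt/ost0              # on each OSS
umount /mnt/mdt               # on the MDS
umount /mnt/mgt               # on the MGS (one step with a combined MGS/MDT)
```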
2008 Feb 05
2
lctl deactivate questions
Hi;
One of our OSTs filled up. Once we realized this,
we executed
lctl --device 9 deactivate
on our fs's combo MDS/MGS machine.
We saw in the syslog that the OST in
question was deactivated:
Lustre: setting import ufhpc-OST0008_UUID INACTIVE by administrator request
However, 'lfs df' on the clients does not show
that the OST is deactivated there, unless we *also*
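That behavior is expected: `lctl deactivate` on the MDS only affects the MDS's own OSC device, and each client keeps its import active until told otherwise. A sketch of the client-side step (the device number is illustrative and differs per node):

```shell
# On each client: find the OSC device number for the full OST
lctl dl | grep OST0008

# Deactivate it there as well, so 'lfs df' and new writes skip the OST
lctl --device 11 deactivate
```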
2012 Oct 18
1
lfs_migrate question
Hi,
I suffered an OSS crash where my OSS server had a CPU fault. I have it running again, but I am trying to decommission it. I am migrating the data off of it onto other OSTs using the lfs find command with lfs_migrate.
It's been nearly 36 hours and about 2 terabytes have been moved. This means I am about halfway. Is this a decent rate?
Here are the particulars, which
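For scale, 2 TB in 36 hours works out to roughly 15-16 MB/s. The migration itself is typically driven like this sketch (the mount point and OST name are assumptions):

```shell
# Rewrite every file that has objects on the decommissioned server's OST;
# lfs_migrate reads the file list from stdin
lfs find /mnt/lustre --obd lustre-OST0005_UUID | lfs_migrate -y
```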
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all,
Here is the situation:
I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to
use as a failover MGS and active/active MDT with ZFS.
I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the
shelf has two SAS ports, connected to a SAS HBA on each node), and I
am using Lustre 2.4 on CentOS 6.4 x64.
I have created 3 zfs pools:
1. mgs:
# zpool
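The entry is cut off at the pool creation; a sketch of how the first two pools and their Lustre targets might be laid out on Lustre 2.4 with the ZFS backend (pool layouts and device names are assumptions, the NIDs are from the post):

```shell
# 1. mgs pool (mirrored pair) and the MGT on top of it
zpool create -f mgspool mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
mkfs.lustre --mgs --backfstype=zfs mgspool/mgt

# 2. mdt pool, registered against both nodes for failover
zpool create -f mdtpool mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD
mkfs.lustre --mdt --backfstype=zfs --fsname=lustre --index=0 \
    --mgsnode=10.0.0.22@tcp --mgsnode=10.0.0.23@tcp \
    --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mdtpool/mdt0
```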