Displaying 20 results from an estimated 1000 matches similar to: "Understanding lustre setup .."
2007 Nov 29
2
Balancing I/O Load
We are seeing some disturbing (probably due to our ignorance)
behavior from Lustre 1.6.3 right now. We have 8 OSSs with 3 OSTs
per OSS (24 physical LUNs). We just created a brand new lustre file
system across this configuration using the default mkfs.lustre
formatting options. We have this file system mounted across 400
clients.
At the moment, we have 63 IOzone threads running
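For context, a minimal sketch of how per-OST usage can be checked and how new files can be striped across all OSTs; the mount point and directory below are illustrative, not the poster's, and the lfs option syntax varies slightly between Lustre versions:
[code]
# Show per-OST space usage to spot imbalance across the 24 OSTs
lfs df -h /mnt/lustre

# Stripe new files under this directory across every available OST
# (stripe count -1 means "use all OSTs"); stripe size stays at the default
lfs setstripe -c -1 /mnt/lustre/iozone-run

# Confirm the layout new files will inherit
lfs getstripe -d /mnt/lustre/iozone-run
[/code]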
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST, and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                   bytes   Used    Available  Use%  Mounted on
lustre-MDT0000_UUID    4.5G    274.3M  3.9G       6%    /mnt/lustre[MDT:0]
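As a quick check on a client in this situation, the configured devices and server connections can be listed; a sketch, assuming the default client mount shown above:
[code]
# List all configured Lustre devices on the client and their state
lctl dl

# Ping the servers this client is configured to talk to
lfs check servers
[/code]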
2010 Aug 17
18
write RPC & congestion
Hi, thanks for previous help.
I have some question about Lustre RPC and the sequence of events that
occur during large concurrent write() involving many processes and large
data size per process. I understand there is a mechanism of flow
control by credits, but I'm a little unclear on how it works in general
after reading the "networking & io protocol" white paper.
Is
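Not a substitute for the white paper's credit discussion, but the client-side limits on concurrent write RPCs per OST can be inspected and tuned like this; parameter names are as in recent Lustre releases, and the value shown is only an example:
[code]
# Maximum number of RPCs this client will keep in flight to each OST
lctl get_param osc.*.max_rpcs_in_flight

# Maximum dirty (not yet written back) cache per OST on this client
lctl get_param osc.*.max_dirty_mb

# Example: raise the per-OST RPC limit (illustrative value)
lctl set_param osc.*.max_rpcs_in_flight=16
[/code]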
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST, and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                   bytes   Used    Available  Use%  Mounted on
lustre-MDT0000_UUID    4.5G    274.3M  3.9G       6%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID
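When lfs df reports an OST as inactive on a client, it can usually be re-activated through lctl; the device number below is illustrative and must be taken from the lctl dl output on that client:
[code]
# Find the device number of the inactive OSC on this client
lctl dl | grep osc

# Re-activate it (7 is only a placeholder device number)
lctl --device 7 activate

# All OSTs should now report usage again
lfs df -h /mnt/lustre
[/code]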
2013 Feb 19
2
Xyratex News Regarding Lustre - Press Release
Greetings Community!
Today we are very excited to announce that Xyratex has purchased Lustre and its assets from Oracle. We intend for Lustre to remain an open-source, community-driven file system to be promoted by our community organizations. We undertook the acquisition because we realize its importance to the entire community and we want to help ensure that it will continue to deliver for all
2009 Nov 06
4
Hadoop Cluster on Xen
Hi all,
Has anyone created a Xen cluster to run a Hadoop VM cluster?
I would be interested in how it performs.
Thanks
Lance
2011 Jun 22
2
Queries regarding Lustre Throughput Numbers with mdtest benchmark
Hi,
I have a query regarding Lustre throughput numbers with the mdtest benchmark. I
am running mdtest with the following options:
/home/meshram/mpich2-new/mpich2-1.4/mpich2-install/bin/mpirun -np 256
-hostfile ./hostfile ./mdtest -z 3 -b 10 -I 5 -v -d /tmp/l66
where
mdtest is the standard benchmark for testing metadata operations. [
https://computing.llnl.gov/?set=code&page=sio_downloads
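For readers unfamiliar with the flags, here is the same run annotated; the flag meanings follow the mdtest documentation, and the paths and process count are simply the poster's values:
[code]
#   -z 3   depth of the directory tree created by each task
#   -b 10  branching factor of that tree
#   -I 5   number of items created per tree node
#   -v     verbose output
#   -d     directory in which the test runs
mpirun -np 256 -hostfile ./hostfile \
    ./mdtest -z 3 -b 10 -I 5 -v -d /tmp/l66
[/code]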
2012 Mar 15
28
Lustre and cross-platform portability
Whamcloud and EMC are jointly investigating how to contribute the Lustre client code to the upstream Linux kernel.
As a prerequisite to this, EMC is working to clean up the Lustre client code to better match the kernel coding style, and one of the anticipated major obstacles to upstream kernel submission is the heavy use of code abstraction via libcfs for portability to other
2012 Dec 04
1
possible file corruption
Hello,
I have a troubling issue with random file corruption using either Lustre
1.8.6 (internal Cray Lustre) or Lustre 2.1 (Sonexion, produced by
Xyratex).
Randomly, our users will come across an issue with files either having zero
size or being corrupted. The zero-size files are usually ASCII files
(which are normally created with simple cat and awk statements,
serially), while the corrupted
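A first step that is sometimes useful when chasing zero-size or corrupted files on Lustre is to record which OSTs hold the file's objects at the time the problem is seen; the path below is only a placeholder:
[code]
# Which OSTs hold the objects of the suspect file?
lfs getstripe /lustre/project/output.dat

# Size and timestamps as the client currently sees them
stat /lustre/project/output.dat
[/code]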
2010 Jun 16
6
clustered file system of choice
Hi all,
I am just trying to consider my options for storing a large mass of
data (tens of terabytes of files) and one idea is to build a
clustered FS of some kind. Has anybody had any experience with that?
Any recommendations?
Thanks in advance for any and all advice.
Boris.
2008 Dec 10
2
VTd not showing PCI device in VM
I'm having trouble actually seeing a PCI device in my VMs. I've
resolved several of my issues using previous posts and using the
VTdHowTo wiki page. I have both VT and VTd BIOS options enabled. I
have pciback hiding the devices and xm can list and assign the devices
to VMs. I don't see the PCI devices in my VMs though. I have tried a
Windows and a Linux VM, without
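For comparison, a typical xm-era passthrough setup looks roughly like the sketch below; the BDF 0000:03:00.0 and the domain name mydomU are placeholders, not values from the original post:
[code]
# Confirm pciback has made the device assignable
xm pci-list-assignable-devices

# Guest config line handing the function to the VM:
#   pci = [ '0000:03:00.0' ]

# Or hot-attach it to a running domain
xm pci-attach mydomU 0000:03:00.0
[/code]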
2013 Aug 16
2
Xyratex disk units
I am wondering if any one knows of a way to manage Xyratex disk shelves from CentOS (in particular CentOS 4).
More details:
Some years ago I installed a NAS unit from Exanet which consists of 2 rebadged IBM x3650 head nodes and a couple of Xyratex disk shelves with a total of 96 TB of raw disk, connected by fibre channel. The operating system is based on CentOS 4.4, but is modified, and runs a
2013 May 20
1
Glusterfs-Hadoop
Hi,
Where can I find glusterfs-hadoop-0.20.2-0.1.x86_64.rpm?
The following link is from the GlusterFS Admin Guide, but it doesn't exist:
http://download.gluster.com/pub/gluster/glusterfs/qa-releases/3.3-beta-2/glusterfs-hadoop-0.20.2-0.1.x86_64.rpm
Thanks!
2009 May 07
4
problem with conditionals
I'm new to Puppet. I'm trying to use some real-case examples to better
understand how Puppet works.
Here's my case:
exec { "usermod -d /home/hadoop -s /bin/bash hadoop":
  unless => "test `grep ^hadoop /etc/passwd | awk -F: '{print $6}'` == '/home/hadoop'"
}
The idea is the usermod would only
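The backtick expression in that unless clause can be checked by hand from a shell; a simpler equivalent using getent (a substitute, not part of the original manifest) is shown below. Note also that $6 inside Puppet double quotes may be interpolated by Puppet itself, so it typically needs escaping as \$6:
[code]
# Home directory currently recorded for the hadoop user
getent passwd hadoop | cut -d: -f6

# Exit status 0 (i.e. skip the usermod) only when it is already /home/hadoop
test "$(getent passwd hadoop | cut -d: -f6)" = /home/hadoop
[/code]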
2013 Oct 09
2
Error while running MR using rmr2
Hi,
I have been trying to run a simple MR program using rmr2 on a single-node Hadoop
cluster. Here is the environment for the setup:
Ubuntu 12.04 (32 bit)
R (Ubuntu comes with 2.14.1, so updated to 3.0.2)
Installed the latest rmr2 and rhdfs from
https://github.com/RevolutionAnalytics/RHadoop/wiki/Downloads and
the corresponding dependencies
Hadoop 1.2.1
Now I am trying to run a simple MR
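One environment detail that often trips up rmr2 is that it needs to find the Hadoop launcher and streaming jar before R starts; a sketch for a Hadoop 1.2.1 install, with paths that are assumptions and must be adjusted to the actual layout:
[code]
# Tell rmr2 where Hadoop and the streaming jar live, then start R
export HADOOP_CMD=/usr/local/hadoop/bin/hadoop
export HADOOP_STREAMING=/usr/local/hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar
R --vanilla < wordcount.R   # wordcount.R is a placeholder script name
[/code]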
2011 Oct 19
1
gluster map/reduce performance..
Hi all,
I am trying to check the Map/Reduce performance of the Gluster file system.
Mapper-side speed is quite good and is sometimes faster than Hadoop's map job,
but the reduce-side job is much slower than Hadoop's.
I analyzed the results and found that the primary reason for the slow speed is poor performance in the merging stage.
Would you have any suggestions for this issue?
FYI, check the blog
2013 May 10
12
Interested in contributing to Lustre
Hi all,
I am a grad student at Carnegie Mellon University. I took coursework in
advanced storage systems last semester, and I am interested in working
on Lustre. I would prefer to take up a project that could be completed
within a month or two.
Since I am a novice with respect to the Lustre code base, I seek
your opinion in choosing a project from the list:
2011 Jan 04
5
Allowing puppet to drop privileges for a manifest
Greetings,
Our environment consists of about 600 Red Hat Enterprise Linux 3, 4, 5,
and soon 6 servers. We use cfengine 2 currently, but plan on
migrating to Puppet. Right now, we have our root-owned cfengine
client running every 15 minutes from cron, contacting a single cfservd
server. Additionally, our employees start their own cfengine and
puppet instances on some servers running under