similar to: possible file corruption

Displaying 16 results from an estimated 2000 matches similar to: "possible file corruption"

2013 Feb 19
2
Xyratex News Regarding Lustre - Press Release
Greetings Community! Today we are very excited to announce that Xyratex has purchased Lustre and its assets from Oracle. We intend for Lustre to remain an open-source, community-driven file system to be promoted by our community organizations. We undertook the acquisition because we realize its importance to the entire community and we want to help ensure that it will continue to deliver for all
2013 Mar 11
4
Understanding lustre setup ..
Hello, I have been reading http://wiki.lustre.org/images/1/1b/Hadoop_wp_v0.4.2.pdf for setting up Hadoop over Lustre. In a typical Hadoop setup we have 1 NameNode and some number of DataNodes. If I want to set up the same with Lustre as the backend, the document mentions: ".............Our experiments run on cluster with 8 nodes in total, one is mds/namenode, the rest are
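The setup described in that white paper boils down to mounting the same Lustre file system on every Hadoop node and pointing Hadoop at the mount point instead of HDFS. A minimal sketch of the client side, assuming an MGS/MDS NID of 10.0.0.1@tcp and a file system named "lustre" (both placeholders, not values from the thread):
[code]
# On each Hadoop node: mount the shared Lustre file system
# (assumes the Lustre client modules are already installed)
mkdir -p /mnt/lustre
mount -t lustre 10.0.0.1@tcp:/lustre /mnt/lustre

# Hadoop is then configured to use file:///mnt/lustre/... paths
# instead of an hdfs:// URI (see the white paper for the exact settings).
[/code]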
2011 Jun 22
2
Queries regarding Lustre Throughput Numbers with mdtest benchmark
Hi, I have a query regarding Lustre throughput numbers with the mdtest benchmark. I am running mdtest with the following options: /home/meshram/mpich2-new/mpich2-1.4/mpich2-install/bin/mpirun -np 256 -hostfile ./hostfile ./mdtest -z 3 -b 10 -I 5 -v -d /tmp/l66 where mdtest is the standard benchmark for testing metadata operations. [ https://computing.llnl.gov/?set=code&page=sio_downloads
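For reference, the same invocation spread out for readability; mdtest's -z/-b/-I options control the depth, branching factor, and items per directory of the generated directory tree (paths are the poster's):
[code]
mpirun -np 256 -hostfile ./hostfile \
    ./mdtest -z 3 -b 10 -I 5 -v \
    -d /tmp/l66   # -d must point at a directory on the Lustre mount;
                  # if /tmp/l66 is a local file system, the numbers
                  # measure that file system rather than Lustre
[/code]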
2007 Nov 29
2
Balancing I/O Load
We are seeing some disturbing (probably due to our ignorance) behavior from lustre 1.6.3 right now. We have 8 OSSs with 3 OSTs per OSS (24 physical LUNs). We just created a brand new lustre file system across this configuration using the default mkfs.lustre formatting options. We have this file system mounted across 400 clients. At the moment, we have 63 IOzone threads running
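With the default layout each file lands on a single OST, so a handful of heavy IOzone writers can leave some OSTs idle while others saturate. A hedged sketch of the usual first checks (paths are illustrative, and lfs option syntax varies a little between versions):
[code]
# See how full each OST is
lfs df -h /mnt/lustre

# Inspect the current layout of the benchmark directory
lfs getstripe /mnt/lustre/iozone-dir

# Stripe new files in this directory across all OSTs
lfs setstripe -c -1 /mnt/lustre/iozone-dir
[/code]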
2013 Apr 16
2
UID/GID access control in Lustre
Hello list members, I started to develop a kernel module which hooks into Lustre 2.3 for controlling data access based on NID and uid/gid. The background is the following: here at GSI we currently have a reserved uid/gid space which partner institutes use to access our exported Lustre mounts. However, we currently have no mechanism to control (guarantee) that the reserved uid/gid space is
2012 Jun 12
1
OpenSFS/EOFS Booth at ISC12
Coming to Hamburg next week? The European Open File System (EOFS) and Open Scalable File Systems (OpenSFS) would love to see you at ISC'12, June 17-21, 2012, in Hamburg, Germany. Visit our booth (#765) to meet Lustre experts, participate in informative talks, ask hard questions, and join the various events we are hosting. First off, there's a full schedule of Lustre talks, each with
2013 May 10
12
Interested in contributing to Lustre
Hi all, I am a grad student at Carnegie Mellon University. I took coursework in advanced storage systems last semester, and I am interested in working on Lustre. I would prefer a project that could be completed in a month or two. Since I am new to the Lustre code base, I would like your opinion on choosing a project from this list:
2012 Mar 15
28
Lustre and cross-platform portability
Whamcloud and EMC are jointly investigating how to contribute the Lustre client code to the upstream Linux kernel. As a prerequisite, EMC is working to clean up the Lustre client code to better match the kernel coding style; one of the anticipated major obstacles to upstream submission is the heavy use of code abstraction via libcfs for portability to other
2010 Aug 17
18
write RPC & congestion
Hi, thanks for the previous help. I have some questions about Lustre RPC and the sequence of events that occurs during large concurrent write() involving many processes and a large data size per process. I understand there is a mechanism of flow control by credits, but I'm a little unclear on how it works in general after reading the "networking & io protocol" white paper. Is
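Much of the client-side write throttling the white paper describes is visible as per-OSC tunables on each client; a quick way to poke at it (parameter names as used on 1.8/2.x clients, shown here as a sketch rather than a complete answer):
[code]
# Cap on concurrent RPCs in flight per OSC
lctl get_param osc.*.max_rpcs_in_flight

# Cap on dirty, not-yet-flushed page cache per OSC
lctl get_param osc.*.max_dirty_mb

# RPC size/queueing histograms, useful when studying congestion
lctl get_param osc.*.rpc_stats
[/code]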
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M  3.9G       6%   /mnt/lustre[MDT:0]
lustre-OST0000_UUID
2010 Sep 03
1
Compiling lustre-client 2.0.0.1 on RHEL 4
Hi, I tried to compile lustre-client 2.0.0.1 on RHEL4 with kernel 2.6.9-89.0.28.EL-x86_64 and got 3 errors and 1 warning during the compile. The compile runs with the -Werror option, so it fails in all 4 cases. * Error: lustre_compat25.h CC [M] /usr/src/redhat/BUILD/lustre-2.0.0.1/lustre/fid/fid_handler.o In file included from
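For reference, a client-only build is normally configured roughly as below; whether 2.0.0.1 actually supports the RHEL4 2.6.9 kernel is a separate question, and the kernel source path here is illustrative:
[code]
# Client-only build against the installed kernel source tree
cd lustre-2.0.0.1
./configure --disable-server \
    --with-linux=/usr/src/kernels/2.6.9-89.0.28.EL-x86_64
make rpms
[/code]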
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows:
[code]
[root@MDS ~]# lctl list_nids
10.94.214.185@tcp
[root@MDS ~]#
[/code]
On Lustre Client1:
[code]
[root@lustreclient1 lustre]# lfs df -h
UUID                 bytes   Used    Available  Use%  Mounted on
lustre-MDT0000_UUID  4.5G    274.3M  3.9G       6%   /mnt/lustre[MDT:0]
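An OST reported as inactive usually means the client's OSC for that target is not connected or has been deactivated. A hedged sketch of the usual checks, run on the client (the device number is an example taken from 'lctl dl' output, not from this thread):
[code]
# List configured devices and their state; a disconnected OSC shows up here
lctl dl

# Check whether each OSC currently considers its OST active (1) or not (0)
lctl get_param osc.*.active

# Re-activate a deactivated device, using its number from 'lctl dl'
lctl --device 7 activate
[/code]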
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all, here is the situation: I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a failover MGS and active/active MDT with ZFS. I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the shelf has 2 SAS ports, connected to a SAS HBA on each node), and I am using Lustre 2.4 on CentOS 6.4 x64. I have created 3 ZFS pools: 1. mgs: # zpool
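The excerpt is cut off, but the general shape of a ZFS-backed MGS/MDT setup on Lustre 2.4 looks roughly like the sketch below. Pool names, disk paths and mount points are illustrative; only the two NIDs come from the thread:
[code]
# Mirrored pools on the shared JBOD (device paths are examples)
zpool create -o cachefile=none mgspool mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
zpool create -o cachefile=none mdtpool mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD

# MGS and MDT datasets, with both nodes declared as service nodes for failover
mkfs.lustre --mgs --backfstype=zfs \
    --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mgspool/mgs

mkfs.lustre --mdt --backfstype=zfs --fsname=lustre --index=0 \
    --mgsnode=10.0.0.22@tcp --mgsnode=10.0.0.23@tcp \
    --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mdtpool/mdt0

# Start the targets on whichever node currently owns the pools
mount -t lustre mgspool/mgs /mnt/mgs
mount -t lustre mdtpool/mdt0 /mnt/mdt0
[/code]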
2010 Jul 08
5
No space left on device on not full filesystem
Hello, we are running Lustre 1.8.1 and hit a "No space left on device" error when uploading 500 GB of small files (less than 100 KB each). The problem seems to depend on the number of files. If we remove one file, we can create one new file, even a GB-sized one; but if we haven't removed something, we can't create even a very small file, for example using touch
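With very many small files, "No space left on device" on a file system that still shows free bytes is often inode/object exhaustion on the MDT or an OST rather than a lack of blocks, so that is worth checking first (mount point is illustrative):
[code]
# Free space in bytes per target
lfs df -h /mnt/lustre

# Free inodes/objects per target; a full MDT shows up here, not in 'lfs df -h'
lfs df -i /mnt/lustre
[/code]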
2010 Aug 11
3
Failure when mounting Lustre
Hi, I get the following error when I try to mount Lustre on the clients.
Permanent disk data:
Target:     lustre-OSTffff
Index:      unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:      0x72 (OST needs_index first_time update )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=164.107.119.231@tcp
sh: losetup: command not found
mkfs.lustre: error 32512 on losetup:
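The "losetup: command not found" line is the actual failure: mkfs.lustre typically falls back to a loop device only when the target it was given is a regular file rather than a block device, and that fallback needs losetup in the PATH. A hedged sketch of the two usual fixes (the block device path is an example; only the mgsnode NID comes from the thread):
[code]
# Either make losetup available (it ships with util-linux on RHEL;
# the package may be named util-linux-ng on newer releases) ...
which losetup || yum install -y util-linux

# ... or, more likely, format a real block device instead of a plain file
mkfs.lustre --ost --fsname=lustre \
    --mgsnode=164.107.119.231@tcp /dev/sdb
[/code]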
2008 Mar 07
2
Multihomed question: want Lustre over IB and Ethernet
Chris, perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not. Shane ----- Original Message ----- From: lustre-discuss-bounces@lists.lustre.org <lustre-discuss-bounces@lists.lustre.org> To: lustre-discuss <lustre-discuss@lists.lustre.org> Sent: Fri Mar 07 12:03:17 2008 Subject: Re: [Lustre-discuss] Multihomed
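The "writeconf" step being referred to regenerates the configuration logs so that new NIDs (for example, an added IB network) are picked up; it is run with tunefs.lustre while the whole file system is stopped. A rough sketch (device paths are illustrative):
[code]
# With all clients and targets unmounted:
# on the MDT
tunefs.lustre --writeconf /dev/mdt_device
# on every OST
tunefs.lustre --writeconf /dev/ost_device
# then remount the MGS/MDT first, the OSTs next, and the clients last
[/code]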