similar to: mdt backup tar --xattrs question

Displaying 20 results from an estimated 4000 matches similar to: "mdt backup tar --xattrs question"
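The thread title refers to the file-level MDT backup procedure, where the extended attributes on the MDT matter as much as the file data. A minimal sketch, assuming an ldiskfs MDT and hypothetical device/paths (whether tar itself preserves the trusted.* xattrs depends on the GNU tar version, hence the separate getfattr dump):

    mount -t ldiskfs /dev/mdtdev /mnt/mdt_bak              # hypothetical device and mount point
    cd /mnt/mdt_bak
    getfattr -R -d -m '.*' -e hex -P . > /backup/ea.bak    # dump extended attributes separately
    tar czf /backup/mdt.tgz --xattrs --sparse .            # GNU tar >= 1.27 for --xattrs
    cd / && umount /mnt/mdt_bak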

2012 Nov 02
3
lctl ping of Pacemaker IP
Greetings! I am working with Lustre-2.1.2 on RHEL 6.2. First I configured it using the standard defaults over TCP/IP. Everything worked very nicely using a real, static --mgsnode=a.b.c.x value which was the actual IP of the MGS/MDS system1 node. I am now trying to integrate it with Pacemaker-1.1.7. I believe I have most of the set-up completed with a particular exception. The "lctl
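For context, a quick way to test whether a floating address answers on the Lustre side is an LNET ping against its NID; a sketch with a hypothetical Pacemaker-managed address a.b.c.y:

    lctl ping a.b.c.y@tcp    # should list the NIDs of whichever node currently holds the IP
    lctl list_nids           # run on that node to confirm its locally configured NIDs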
2008 Jun 17
4
maximum MDT inode count
For future filesystem compatibility, we are wondering if there are any Lustre MDT filesystems in existence that have 2B or more total inodes? This is fairly unlikely, because it would require an MDT filesystem that is > 8TB in size (which isn't even supported yet) and/or has been formatted with specific options to increase the total number of inodes. This can be checked with
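One way to check the totals, sketched here with a hypothetical MDT device name (the excerpt is truncated before the poster's own command):

    dumpe2fs -h /dev/mdtdev | grep -i 'inode count'   # on the MDS, for an ldiskfs MDT
    lfs df -i                                         # from a client: inode totals per target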
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't find a bz for this, but it would appear that tunefs.lustre --print fails on a lustre mdt or ost device if mounted with mmp. Is this expected behavior? TIA mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1 checking for existing Lustre data: not found tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
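A workaround sketch, assuming the target can be taken offline briefly (this says nothing about whether the reported behavior is a bug; it just reads the labels with the target stopped):

    umount /mnt/mdt                          # stop the target on the node that has it mounted
    tunefs.lustre --print /dev/mapper/mdt1   # the Lustre labels should now be readable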
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all, Here is the situation: I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a failover MGS and active/active MDT with ZFS. I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the shelf has 2 SAS ports, connected to a SAS HBA on each node), and I am using Lustre 2.4 on CentOS 6.4 x64. I have created 3 zfs pools: 1. mgs: # zpool
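A rough sketch of the ZFS-backed MGS/MDT formatting this setup implies, with pool, dataset, and disk names as placeholders (Lustre 2.4 syntax; --servicenode declares both failover NIDs):

    zpool create -o cachefile=none mgspool mirror disk1 disk2
    mkfs.lustre --mgs --backfstype=zfs \
        --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mgspool/mgs
    mkfs.lustre --mdt --index=0 --fsname=lustre --backfstype=zfs \
        --mgsnode=10.0.0.22@tcp --mgsnode=10.0.0.23@tcp \
        --servicenode=10.0.0.22@tcp --servicenode=10.0.0.23@tcp mdtpool/mdt0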
2008 Dec 24
6
Bug when using /dev/cciss/c0d2 as mdt/ost
I am trying to build lustre-1.6.6 against the pre-patched kernel downloaded from SUN. But as written in the Operations Manual, it creates rpms for 2.6.18-92.1.10.el5_lustrecustom. Is there a way to ask it not to append "custom" as the extraversion? The running kernel is 2.6.18-92.1.10.el5_lustre.1.6.6smp. -- Regards-- Rishi Pathak National PARAM Supercomputing Facility Center for Development of Advanced
2010 Aug 19
3
SSD caching of MDT
Article by Jeff Layton: http://www.linux-mag.com/id/7839 anyone have views on whether this sort of caching would be useful for the MDT? My feeling is that MDT reads are probably pretty random but writes might benefit...? GREG -- Greg Matthews 01235 778658 Senior Computer Systems Administrator Diamond Light Source, Oxfordshire, UK
2010 Jul 08
5
No space left on device on not full filesystem
Hello, We are running Lustre 1.8.1 and have hit a "No space left on device" error when uploading 500 GB of small files (less than 100 KB each). The problem seems to depend on the number of files. If we remove one file, we can create one new file, even one of GB size; but if we haven't removed something, we can't create even a very small file, as an example using touch
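When small files trigger ENOSPC on a filesystem that is not full by bytes, the usual first check is whether a target has run out of inodes/objects rather than space; a sketch, with a hypothetical client mount point:

    lfs df -h /mnt/lustre    # free space per MDT/OST
    lfs df -i /mnt/lustre    # free inodes per target; many small files exhaust the MDT inodes first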
2013 Apr 16
2
UID/GID access control in Lustre
Hello list members, I started to develop a kernel module which hooks into Lustre 2.3 for controlling data access based on nid and uid/gid. The background is the following: here at GSI we currently have a reserved uid/gid space which partner institutes use to access our exported Lustre mounts. However, we currently have no mechanism to control (guarantee) that the reserved uid/gid space is
2008 Feb 05
2
obdfilter/datafs-OST0000/recovery_status
I'm evaluating lustre. I'm trying what I think is a basic/simple ethernet config, with MDT and OST on the same node. Can someone tell me if the following (~150 second recovery occurring when a small 190 GB OST is re-mounted) is expected behavior or if I'm missing something? I thought I would send this and continue with the eval while awaiting a response. I'm using
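The recovery window can be watched directly on the server while clients reconnect; a sketch using the proc path from the subject line (the lctl form is the newer equivalent):

    cat /proc/fs/lustre/obdfilter/datafs-OST0000/recovery_status
    lctl get_param obdfilter.datafs-OST0000.recovery_status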
2013 Mar 18
1
OST0006 : inactive device
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows: [code] [root@MDS ~]# lctl list_nids 10.94.214.185@tcp [root@MDS ~]# [/code] On Lustre Client1: [code] [root@lustreclient1 lustre]# lfs df -h UUID bytes Used Available Use% Mounted on lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6% /mnt/lustre[MDT:0] lustre-OST0000_UUID
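To see why a target shows as inactive on a client, one can inspect the client-side OSC devices; a sketch with device names guessed from the lfs output (the last line only applies if the OST was deactivated administratively):

    lctl dl | grep osc                               # list OSC devices and their state
    lctl get_param osc.lustre-OST0006-osc-*.active   # 1 = active, 0 = deactivated
    lctl set_param osc.lustre-OST0006-osc-*.active=1 # re-enable an administratively disabled OST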
2012 Oct 09
1
MDS read-only
Dear all, Two of our MDSes have repeatedly gone read-only recently after an e2fsck on Lustre 1.8.5. After the MDT has been mounted for a while, the kernel reports errors like: Oct 8 20:16:44 mainmds kernel: LDISKFS-fs error (device cciss!c0d1): ldiskfs_ext_check_inode: bad header/extent in inode #50736178: invalid magic - magic 0, entries 0, max 0(0), depth 0(0) Oct 8 20:16:44 mainmds
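The usual response to ldiskfs extent/header corruption is an offline filesystem check with the Lustre-patched e2fsprogs; a cautious sketch, using the device named in the log and a hypothetical mount point:

    umount /mnt/mdt                 # the MDT must be stopped first
    e2fsck -fn /dev/cciss/c0d1      # read-only pass: report what would be fixed
    e2fsck -fp /dev/cciss/c0d1      # repair pass, ideally after a device-level backup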
2007 Nov 19
6
Dedicated MGS?
This may be in the documentation. If so, I missed it. If a site has multiple Lustre file systems, the documentation implies that there only needs to be a single MGS for an entire site (regardless of the number of file systems). However, I also know it is fairly common to have a combined MGS/MDT. So here are the questions. 1. If we are going to have several Lustre file systems,
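For reference, a dedicated MGS is just a target formatted with --mgs and no --mdt, and each filesystem's MDT then points at it with --mgsnode; a sketch with hypothetical devices and NID:

    mkfs.lustre --mgs /dev/sdb
    mkfs.lustre --fsname=fs1 --mdt --index=0 --mgsnode=10.0.0.1@tcp /dev/sdc
    mkfs.lustre --fsname=fs2 --mdt --index=0 --mgsnode=10.0.0.1@tcp /dev/sdd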
2013 Mar 18
1
lustre showing inactive devices
I installed 1 MDS, 2 OSS/OST and 2 Lustre clients. My MDS shows: [code] [root@MDS ~]# lctl list_nids 10.94.214.185@tcp [root@MDS ~]# [/code] On Lustre Client1: [code] [root@lustreclient1 lustre]# lfs df -h UUID bytes Used Available Use% Mounted on lustre-MDT0000_UUID 4.5G 274.3M 3.9G 6% /mnt/lustre[MDT:0]
2008 Mar 07
2
Multihomed question: want Lustre over IB andEthernet
Chris, Perhaps you need to perform some writeconf-like command. I'm not sure if this is needed in 1.6 or not. Shane ----- Original Message ----- From: lustre-discuss-bounces at lists.lustre.org <lustre-discuss-bounces at lists.lustre.org> To: lustre-discuss <lustre-discuss at lists.lustre.org> Sent: Fri Mar 07 12:03:17 2008 Subject: Re: [Lustre-discuss] Multihomed
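For the multihomed case itself, the LNET side is declared per interface, and a NID change on the servers is normally followed by a writeconf; a sketch with hypothetical interface and device names:

    # /etc/modprobe.conf (1.6 era)
    options lnet networks="o2ib0(ib0),tcp0(eth0)"
    # after changing server NIDs, with all targets unmounted, on each server:
    tunefs.lustre --writeconf /dev/mdtdev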
2012 Sep 27
4
Bad reporting inodes free
Hello, When I run a "df -i" on my clients I get 95% inodes used, or 5% inodes free: Filesystem Inodes IUsed IFree IUse% Mounted on lustre-mds-01:lustre-mds-02:/cetafs 22200087 20949839 1250248 95% /mnt/data But if I run lfs df -i I get: UUID Inodes IUsed IFree I
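The client's df -i figure is derived from the MDT inode counters (and can be capped by free OST objects), so it need not match the per-target numbers exactly; a sketch for comparing them, noting that parameter paths differ between 1.8 and 2.x (the osd-* form shown is the 2.x one):

    lfs df -i /mnt/data                                    # per-target inode usage from the client
    lctl get_param osd-*.*.filestotal osd-*.*.filesfree    # raw counters on the servers (2.x paths)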
2010 Sep 09
1
What's the correct sequence to umount multiple Lustre file systems
Any recommendations about the sequence for umounting multiple Lustre file systems with a combined MGS/MDT or a separate MGS and MDT? Thanks. Ming
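A commonly recommended order is clients first, then OSTs, then MDT(s), with the MGS (or the combined MGS/MDT) last; a sketch with hypothetical mount points:

    umount /mnt/lustre          # on every client
    umount /mnt/ost0 /mnt/ost1  # on each OSS
    umount /mnt/mdt             # on the MDS
    umount /mnt/mgs             # MGS last (skip if it is combined with the MDT)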
2010 Jul 01
6
best practice for lustre cluster startup
Hello, I have recently installed a Lustre cluster which is in a test phase now but will potentially be in 24x7 production if it's accepted. I would like input from the list on the recommendations/best practices for configuring Lustre cluster startup. Is it advisable to have Lustre start automatically on the various server pieces (MGS/MDT/OSSs)? If not, why not?
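One common pattern is to drive server target mounts from /etc/fstab, either automatically with _netdev or left as noauto so the admin or a cluster manager mounts them explicitly; a sketch with hypothetical devices:

    # /etc/fstab on an OSS
    /dev/sdb   /mnt/ost0   lustre   defaults,_netdev   0 0
    # or, to keep startup manual / under cluster-manager control:
    /dev/sdb   /mnt/ost0   lustre   defaults,noauto    0 0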
2013 Feb 12
2
Lost folders after changing MDS
OK, so our old MDS had hardware issues, so I configured a new MGS/MDS on a VM (this is a backup Lustre filesystem and I wanted to separate the MGS/MDS from the OSSes of the previous setup), and then did this: For example: mount -t ldiskfs /dev/old /mnt/ost_old mount -t ldiskfs /dev/new /mnt/ost_new rsync -aSv /mnt/ost_old/ /mnt/ost_new # note trailing slash on ost_old/ If you are unable to connect both
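One caveat with the quoted procedure: rsync -aSv does not copy the extended attributes in which Lustre stores striping metadata on the MDT; a hedged variant (run as root so the trusted.* namespace is included; names hypothetical):

    mount -t ldiskfs /dev/old /mnt/mdt_old
    mount -t ldiskfs /dev/new /mnt/mdt_new
    rsync -aSvX /mnt/mdt_old/ /mnt/mdt_new                         # -X/--xattrs; note trailing slash on the source
    getfattr -R -d -m '.*' -e hex -P /mnt/mdt_old > /tmp/ea.bak    # optional belt-and-braces EA dump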
2007 Nov 07
9
How To change server recovery timeout
Hi, Our lustre environment is: 2.6.9-55.0.9.EL_lustre.1.6.3smp. I would like to change the recovery timeout from the default value of 250s to something longer. I tried the example from the manual: set_timeout <secs> Sets the timeout (obd_timeout) for a server to wait before failing recovery. We performed that experiment on our test Lustre installation with one OST. storage02 is our OSS [root@
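For later releases the same obd_timeout can be changed with lctl parameters; a sketch with a hypothetical fsname (on 1.6.3 the quoted set_timeout form from the manual applies):

    lctl set_param timeout=600                 # temporary, on the server
    lctl conf_param testfs.sys.timeout=600     # persistent, run on the MGS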