Displaying 6 results from an estimated 6 matches for "lustrefs".
2010 Jul 05
4
Adding OST to online Lustre with quota
...it possible to add OSTs to a Lustre filesystem with
quota support without taking it offline?
We tried to do this, but all quota information was lost. Although the
OST was formatted with quota support,
we are receiving this error message:
Lustre: 3743:0:(lproc_quota.c:447:lprocfs_quota_wr_type())
lustrefs-OST0016: quotaon failed because quota files don't exist, please
run quotacheck firstly
The message suggests running quotacheck again, but maybe there
is a faster solution.
Could someone please tell us the proper procedure, or point us to the
relevant documentation?
Thank you...
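A common way out of this state, sketched below under the assumption of a Lustre 1.8-era client with the filesystem mounted at the hypothetical path /mnt/lustre, is to regenerate the quota files with lfs quotacheck and then re-enable quota enforcement (these are cluster-administration commands and need a live Lustre mount):

```shell
# Regenerate quota files for users (-u) and groups (-g) across all
# targets, including the newly added OST; this scans the whole
# filesystem and can take a while on large installations.
lfs quotacheck -ug /mnt/lustre

# Turn quota enforcement back on once quotacheck completes.
lfs quotaon -ug /mnt/lustre

# Spot-check that usage numbers are back for a given user.
lfs quota -u someuser /mnt/lustre
```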
2010 Sep 13
2
1.8.4 and write-through cache
...and 400 KB/s, with all the ost_io threads in D state
(no writes). They would stay in this state for ~10 minutes and then suddenly
wake up and start pushing data again. 1-2 minutes later, they would lock
up again.
The OSSes were dumping stacks all over the place, crawling along, and
generally making our lustrefs unusable.
After trying different kernels and RAID card drivers, changing the
write-back policy on the RAID cards, etc., the solution was to run
lctl set_param obdfilter.*.writethrough_cache_enable=0
lctl set_param obdfilter.*.read_cache_enable=0
on all the nodes with the 3ware cards.
Has anyone els...
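For reference, the workaround from the post can be verified on each OSS with lctl get_param (a sketch; note that set_param changes are not persistent across an OSS reboot and must be reapplied, e.g. from a local init script):

```shell
# Disable the OSS read cache and write-through cache
# (the workaround described in the post).
lctl set_param obdfilter.*.writethrough_cache_enable=0
lctl set_param obdfilter.*.read_cache_enable=0

# Verify the settings took effect on every obdfilter target.
lctl get_param obdfilter.*.writethrough_cache_enable
lctl get_param obdfilter.*.read_cache_enable
```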
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
...t="1"? man
fenced says this is a safe way, especially during startup, because it prevents
data corruption if a node was dead for some reason. From my understanding,
CTDB uses CMAN only as a "module" to get access to gfs/gfs2 partitions. Or
maybe it is better to look at GPFS and LustreFS?
Could anybody show the working configuration of cluster.conf for
CTDB+GFS2+CMAN?
I used the following cluster.conf and ctdb conf:
<?xml version="1.0"?>
<cluster name="smb-cluster" config_version="8">
<fence_daemon clean_start="0" post_fa...
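For illustration, a minimal cluster.conf of the kind being asked about might look like the sketch below. This is not a tested configuration: the node names, fence device, and attribute values are placeholders, and fence_manual is used only because it needs no hardware (it is unsafe for production).

```xml
<?xml version="1.0"?>
<!-- Illustrative sketch only; names and values are placeholders. -->
<cluster name="smb-cluster" config_version="8">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1">
      <fence>
        <method name="1"><device name="manual" nodename="node1"/></method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <fence>
        <method name="1"><device name="manual" nodename="node2"/></method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
```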
2009 Nov 20
13
Data balance across vdevs
I'm migrating to ZFS and Solaris for cluster computing storage, and have
some completely static data sets that need to be as fast as possible.
One of the scenarios I'm testing is the addition of vdevs to a pool.
Starting out, I populated a pool that had 4 vdevs. Then, I added 3 more
vdevs and would like to balance this data across the pool for
performance. The data may be
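ZFS does not redistribute existing blocks when vdevs are added; only new writes favor the emptier vdevs. A sketch of forcing a rebalance by rewriting the data (the pool name `tank` and dataset names are hypothetical):

```shell
# Add another vdev to the pool; existing data stays where it is.
zpool add tank mirror c0t4d0 c0t5d0

# Existing blocks are only spread across the new vdevs when rewritten,
# so copy the static data set into a fresh dataset and drop the old one.
zfs snapshot tank/data@move
zfs send tank/data@move | zfs recv tank/data-balanced
zfs destroy -r tank/data

# Check how full each vdev is after the rewrite.
zpool iostat -v tank
```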
2014 May 26
0
Remove filesystem directories from MDT
Hello,
I have some problems in my filesystem. When I browse it from a client, a specific directory contains a directory that recursively contains itself; in other words:
LustreFS-> dir A -> dir B -> dir B -> dir B -> dir B -> dir B…
This directory, and its children, has the same obdidx/objid:
[root@client vm-106-disk-1.raw]# lfs getstripe vm-106-disk-1.raw
vm-106-disk-1.raw/vm-106-disk-1.raw
lmm_stripe_count: 1
lmm_stripe_size: 1048576
lmm_pattern...
2008 Jun 13
11
Have 2 DomU share a same Logical Volume
Hi xeners,
I am trying to have two DomUs share a single Logical Volume. I need
at least one of the DomUs to have read-write access to the LV. I have
tried, but so far it only works if both DomUs have read-only access
to the LV.
That's a problem for me, because I wanted to set up Nginx on a DomU,
and Rails on another one, and have Nginx get read access to an LV that
would hold the public
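For reference, Xen's domain config can be told to attach the same LV to several DomUs (a sketch; the LV path and device names are hypothetical). The `w!` mode forces shared writable access, but without a cluster-aware filesystem (e.g. OCFS2 or GFS2) on the LV, concurrent writers will corrupt an ordinary filesystem such as ext3 — which is why read-only sharing "works" and read-write does not:

```python
# Fragment of a Xen domU config file (Python syntax).
# 'w!' forces Xen to allow the block device to be attached writable even
# though it is already in use elsewhere; 'r' attaches it read-only.
# Safe concurrent read-write access still requires a cluster filesystem.
disk = ['phy:/dev/vg0/shared-lv,xvdb,w!']   # writer domU
# disk = ['phy:/dev/vg0/shared-lv,xvdb,r']  # read-only domU
```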