Lior Goikhburg
2008-Dec-17 08:57 UTC
[Gluster-users] Problem with free space limits on servers.
Got unify over AFR with scheduler ALU (gluster 1.4 rc1).

volume files-afr1
  type cluster/afr
  subvolumes client1 client2
end-volume

volume files-afr2
  type cluster/afr
  subvolumes client3 client4
end-volume

volume ns-afr
  type cluster/afr
  subvolumes ns1 ns2
end-volume

volume files-unify
  type cluster/unify
  option namespace ns-afr
  subvolumes files-afr1 files-afr2
  option self-heal background
  option scheduler alu
  option alu.limits.min-free-disk 20%
end-volume

Did a test to completely fill up the space on the mounted drive, to see
what happens.
All nodes got filled up to 100% despite the 20% limit on the nodes.
Why is this happening?
Basavanagowda Kanur
2008-Dec-17 12:42 UTC
[Gluster-users] Problem with free space limits on servers.
option alu.limits.min-free-disk 20%

tells unify to stop scheduling new files (create, symlink, mknod) to a
subvolume once the used disk space on that subvolume reaches 80% of the
total. The scheduler does not guarantee that already-created files won't
grow in size.

In fact, the min-free-disk option stops scheduling to a subvolume
precisely so that the files already on it have room to grow.

--
gowda

On Wed, Dec 17, 2008 at 2:27 PM, Lior Goikhburg <glior at hh.ru> wrote:
> Got unify over AFR with scheduler ALU (gluster 1.4 rc1).
> [volume config snipped; see the first message above]
> Did a test to completely fill up the space on the mounted drive, to see
> what happens.
> All nodes got filled up to 100% despite the limit of 20% on the nodes.
> Why is this happening?

--
hard work often pays off after time, but laziness always pays off now
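A minimal C sketch of the decision gowda describes, purely illustrative:
the struct and function names here are hypothetical and this is not the
actual alu.c source.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-subvolume stats; not the real alu.c structures. */
struct subvol_stats {
    const char *name;
    uint64_t    disk_total;  /* bytes */
    uint64_t    disk_free;   /* bytes */
};

/* min_free_pct corresponds to "option alu.limits.min-free-disk 20%". */
static int
eligible_for_new_files(const struct subvol_stats *s, unsigned min_free_pct)
{
    /* A subvolume stops receiving *new* files (create, symlink, mknod)
     * once its free space falls below the configured minimum. */
    return (s->disk_free * 100) >= ((uint64_t)min_free_pct * s->disk_total);
}

/* Pick a subvolume for a new file; -1 means no node is eligible. */
static int
pick_subvol(const struct subvol_stats *subvols, size_t n,
            unsigned min_free_pct)
{
    for (size_t i = 0; i < n; i++)
        if (eligible_for_new_files(&subvols[i], min_free_pct))
            return (int)i;
    return -1;
}

Note that this check only gates where *new* files are placed. A write()
to a file that already lives on a nearly-full subvolume goes to that
subvolume regardless, which is why existing files can keep growing past
the limit.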
Lior Goikhburg
2008-Dec-19 10:06 UTC
[Gluster-users] Problem with free space limits on servers.
I have set up a simple test that copies new files to the gluster mount.
The files are small and don't grow. The file names are also unique, so
there are no overwrites.

The client starts showing the following log entries:

2008-12-18 19:20:09 W [alu.c:892:alu_scheduler] alu: No node is eligible to schedule

However, it doesn't enforce the limit: I can still copy files until all
the servers have their partitions completely filled.

Is this normal behavior?
Is there a way to enforce the limit, i.e. get a "no space left on
device" type of error back to the process that tries to write?

Lior.
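The log line suggests the scheduler warns and then places the file
somewhere anyway, rather than failing the create; assuming that fallback
behavior, it would match what Lior sees. A C sketch contrasting the two
possible behaviors (again illustrative and hypothetical, not the real
alu.c):

#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Same hypothetical per-subvolume stats as in the earlier sketch. */
struct subvol_stats {
    const char *name;
    uint64_t    disk_total;
    uint64_t    disk_free;
};

static int
pick_subvol(const struct subvol_stats *s, size_t n, unsigned min_free_pct)
{
    for (size_t i = 0; i < n; i++)
        if ((s[i].disk_free * 100) >=
            ((uint64_t)min_free_pct * s[i].disk_total))
            return (int)i;
    return -1;
}

/* What the warning suggests happens today: log, then schedule anyway,
 * so copies keep succeeding until the backends themselves hit ENOSPC. */
static int
schedule_with_fallback(const struct subvol_stats *s, size_t n,
                       unsigned min_free_pct, size_t rr_index)
{
    int idx = pick_subvol(s, n, min_free_pct);
    return idx < 0 ? (int)(rr_index % n) : idx;
}

/* What is being asked for: surface ENOSPC to the writing process. */
static int
schedule_strict(const struct subvol_stats *s, size_t n,
                unsigned min_free_pct)
{
    int idx = pick_subvol(s, n, min_free_pct);
    return idx < 0 ? -ENOSPC : idx;
}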