Thank you, Hari.
I have set:
cluster.tier-promote-frequency: 1800
cluster.tier-demote-frequency: 120
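For reference, a sketch of how I applied these, assuming the standard
volume-set syntax:

gluster volume set FFPrimary cluster.tier-promote-frequency 1800
gluster volume set FFPrimary cluster.tier-demote-frequency 120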
I will let you know if it makes a difference after some time. So far (10
minutes), nothing has changed.
I agree that, judging by the output of 'gluster volume tier FFPrimary
status', demotion does appear to be happening. However, for the last 24
hours nothing in the tier status report has changed except the run time.
Could it be stuck? How would I know? Is there a way to restart it without
restarting the cluster?
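So far the only checks I can think of are the process list and re-running
the status command (a sketch; that the tier daemon shows up under a "tier"
process name is an assumption on my part):

ps aux | grep -i tier                  # is a tier daemon running on each node?
gluster volume tier FFPrimary status   # do the Promoted/Demoted counters move?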
On Sat, Sep 29, 2018 at 11:08 AM Hari Gowtham <hgowtham at redhat.com> wrote:
> Hi,
>
> From the status you provided, I can see that demotion is happening.
> Please verify this on your side.
> I would recommend changing cluster.tier-demote-frequency to 120 and
> cluster.tier-promote-frequency to 1800 to speed up demotions until the
> hot tier has been emptied to some extent. Later you can revert to the
> values you have now.
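> For reference, a hedged sketch of that later revert, using the standard
> volume-set syntax (these exact commands are not in the original mail):
>
> gluster volume set FFPrimary cluster.tier-demote-frequency 1800
> gluster volume set FFPrimary cluster.tier-promote-frequency 120
>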
> On Sat, Sep 29, 2018 at 5:39 PM David Brown <dbccemtp at gmail.com> wrote:
> >
> > Hey Everyone,
> >
> > I have a 3-node GlusterFS cluster that uses an NVMe hot tier and an
> > HDD cold tier.
> > I recently ran into some problems when the hot tier became full, with
> > df -h showing 100%.
> >
> > I did not have watermark-hi set, but it is my understanding that 90%
> > is the default. In an attempt to get the cluster to demote some files,
> > I set cluster.watermark-hi: 80, but it is still not demoting.
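> > For illustration, the volume-set form of that change would be (a
> > sketch; this exact command is not in the original mail):
> >
> > gluster volume set FFPrimary cluster.watermark-hi 80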
> >
> >
> > [root at Glus1 ~]# gluster volume info
> >
> > Volume Name: FFPrimary
> > Type: Tier
> > Volume ID: 466ec53c-d1ef-4ebc-8414-d7d070dfe61e
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 9
> > Transport-type: tcp
> > Hot Tier :
> > Hot Tier Type : Replicate
> > Number of Bricks: 1 x 3 = 3
> > Brick1: Glus3:/data/glusterfs/FFPrimary/brick3
> > Brick2: Glus2:/data/glusterfs/FFPrimary/brick2
> > Brick3: Glus1:/data/glusterfs/FFPrimary/brick1
> > Cold Tier:
> > Cold Tier Type : Distributed-Replicate
> > Number of Bricks: 2 x 3 = 6
> > Brick4: Glus1:/data/glusterfs/FFPrimary/brick5
> > Brick5: Glus2:/data/glusterfs/FFPrimary/brick6
> > Brick6: Glus3:/data/glusterfs/FFPrimary/brick7
> > Brick7: Glus1:/data/glusterfs/FFPrimary/brick8
> > Brick8: Glus2:/data/glusterfs/FFPrimary/brick9
> > Brick9: Glus3:/data/glusterfs/FFPrimary/brick10
> > Options Reconfigured:
> > cluster.tier-promote-frequency: 120
> > cluster.tier-demote-frequency: 1800
> > cluster.watermark-low: 60
> > cluster.watermark-hi: 80
> > performance.flush-behind: on
> > performance.cache-max-file-size: 128MB
> > performance.cache-size: 25GB
> > diagnostics.count-fop-hits: off
> > diagnostics.latency-measurement: off
> > cluster.tier-mode: cache
> > features.ctr-enabled: on
> > transport.address-family: inet
> > nfs.disable: on
> > performance.client-io-threads: off
> > [root at Glus1 ~]# gluster volume tier FFPrimary status
> > Node        Promoted files   Demoted files   Status        run time in h:m:s
> > ---------   --------------   -------------   -----------   -----------------
> > localhost   49               0               in progress   5151:30:45
> > Glus2       0                0               in progress   5151:30:45
> > Glus3       0                2075            in progress   5151:30:47
> > Tiering Migration Functionality: FFPrimary: success
> > [root at Glus1 ~]#
> >
> > What can cause GlusterFS to stop demoting files and let the hot tier
> > fill completely?
> >
> > Thank you!
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Regards,
> Hari Gowtham.
>