So this change to the Gluster Volume Plugin will make it into K8s 1.7 or
1.8. Unfortunately, that's too late for me.
Does anyone know how to disable the performance translators by default?
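For now, the closest thing I have to a default is a loop over the existing
volumes, using the options from Vijay's list further down in this thread
(a sketch, assuming the gluster CLI is reachable, e.g. from inside a
Gluster pod; it obviously does not cover volumes created later):

  for vol in $(gluster volume list); do
    for opt in quick-read io-cache write-behind stat-prefetch \
               read-ahead readdir-ahead open-behind client-io-threads; do
      # switch this performance translator off for the volume
      gluster volume set "$vol" "performance.$opt" off
    done
  done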
Raghavendra Talur <rtalur at redhat.com> wrote on Wed., May 24, 2017,
19:30:
> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt
> <fakod666 at gmail.com> wrote:
> >
> >
> > Vijay Bellur <vbellur at redhat.com> wrote on Wed., May 24, 2017 at
> > 05:53:
> >>
> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
> >> <fakod666 at gmail.com> wrote:
> >>>
> >>> OK, seems that this works now.
> >>>
> >>> A couple of questions:
> >>> - What do you think, are all these options necessary for Kafka?
> >>
> >>
> >> I am not entirely certain which subset of options will make it work, as
> >> I do not understand the nature of the failure with Kafka and the default
> >> gluster configuration. It certainly needs further analysis to identify
> >> the list of options necessary. Would it be possible for you to enable
> >> one option after the other and determine the configuration that works?
> >>
> >>
> >>>
> >>> - You wrote that there will be some kind of application profiles. So
> >>> finding out which set of options works is currently a matter of testing
> >>> (and hope)? Or is there any experience with MongoDB / PostgreSQL /
> >>> Zookeeper etc.?
> >>
> >>
> >> Application profiles are a work in progress. We have a few that are
> >> focused on use cases like VM storage, block storage etc. at the moment.
> >>
> >>>
> >>> - I am using Heketi and Dynamic Storage Provisioning together with
> >>> Kubernetes. Can I set these volume options somehow by default, or via
> >>> the volume plugin?
> >>
> >>
> >>
> >> Adding Raghavendra and Michael to help address this query.
> >
> >
> > For me it would be sufficient to disable some (or all) translators, for
> > all volumes that'll be created, somewhere here:
> > https://github.com/gluster/gluster-containers/tree/master/CentOS
> > This is the container used by the GlusterFS DaemonSet for Kubernetes.
>
> Work is in progress to provide such an option at the volume plugin level.
> We currently have a patch [1] in review for Heketi that allows users to
> set Gluster options using heketi-cli instead of going into a Gluster
> pod. Once this is in, we can add options to the Kubernetes storage-class
> that pass down Gluster options for every volume created in that
> storage-class.
>
> [1] https://github.com/heketi/heketi/pull/751
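To illustrate what that could look like once the feature lands, here is a
purely hypothetical sketch: the parameter name "volumeoptions", its format,
and the resturl value are all assumptions, not released behavior.

cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-no-perf-xlators
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-service:8080"
  # Hypothetical parameter: Gluster options applied to every volume
  # provisioned from this storage-class.
  volumeoptions: "performance.quick-read off, performance.io-cache off"
EOF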
>
> Thanks,
> Raghavendra Talur
>
> >
> >>
> >>
> >> -Vijay
> >>
> >>
> >>
> >>>
> >>>
> >>> Thanks for your help... really appreciated. Christopher
> >>>
> >>> Vijay Bellur <vbellur at redhat.com> wrote on Mon., May 22, 2017 at
> >>> 16:41:
> >>>>
> >>>> Looks like a problem with caching. Can you please try by disabling
> >>>> all performance translators? The following configuration commands
> >>>> would disable performance translators in the gluster client stack:
> >>>>
> >>>> gluster volume set <volname> performance.quick-read off
> >>>> gluster volume set <volname> performance.io-cache off
> >>>> gluster volume set <volname> performance.write-behind off
> >>>> gluster volume set <volname> performance.stat-prefetch off
> >>>> gluster volume set <volname> performance.read-ahead off
> >>>> gluster volume set <volname> performance.readdir-ahead off
> >>>> gluster volume set <volname> performance.open-behind off
> >>>> gluster volume set <volname> performance.client-io-threads off
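After applying these, the effective values can be double-checked with
something like the following (a sketch, assuming a recent Gluster release
that supports "volume get"):

  # list all effective options for the volume and filter the
  # performance translator settings
  gluster volume get <volname> all | grep performance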
> >>>>
> >>>> Thanks,
> >>>> Vijay
> >>>>
> >>>>
> >>>>
> >>>> On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt
> >>>> <fakod666 at gmail.com> wrote:
> >>>>>
> >>>>> Hi all,
> >>>>>
> >>>>> Has anyone ever successfully deployed a Kafka cluster on GlusterFS
> >>>>> volumes?
> >>>>>
> >>>>> In my case it's a Kafka Kubernetes StatefulSet and a Heketi-managed
> >>>>> GlusterFS. Needless to say, I am getting a lot of filesystem-related
> >>>>> exceptions like this one:
> >>>>>
> >>>>> Failed to read `log header` from file channel
> >>>>> `sun.nio.ch.FileChannelImpl at 67afa54a`. Expected to read 12 bytes,
> >>>>> but reached end of file after reading 0 bytes. Started read from
> >>>>> position 123065680.
> >>>>>
> >>>>> I reduced the number of exceptions with the
> >>>>> log.flush.interval.messages=1 option, but not all of them...
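For context, that setting goes into the broker configuration; a minimal
sketch (how it is injected into the StatefulSet depends on the Kafka image
in use, e.g. via a ConfigMap or environment variable):

  # server.properties: flush to disk after every message; narrows the
  # window for the EOF errors above, at a significant throughput cost
  log.flush.interval.messages=1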
> >>>>>
> >>>>> Best, Christopher