Displaying 8 results from an estimated 8 matches for "gluter".
2017 Jul 27
2
how far should one go upping the gluster version before it can harm samba?
... or in other words - can samba break (on CentOS 7.3) if
one takes the gluster version too high?
hi fellas.
I wonder because I see:
smbd[4088153]: Unknown gluster ACL version: -847736808
smbd[4088153]: [2017/07/27 13:12:54.047332, 0]
../source3/modules/vfs_glusterfs.c:1365(gluster_to_smb_acl)
smbd[4088153]: Unknown gluster ACL version: 0
smbd[4088153]: [2017/07/27 13:12:54.162658, 0]
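Since vfs_glusterfs talks to the volume through libgfapi, the first thing worth checking when these ACL-version messages appear is which samba and gluster client bits are actually installed together; a minimal sketch, assuming a stock CentOS 7 layout (package names and the module path are assumptions and may differ):

# Versions of smbd and the gluster client stack it links against
smbd --version
glusterfs --version
rpm -q samba samba-vfs-glusterfs glusterfs-api

# Which libgfapi the gluster VFS module actually loads
ldd /usr/lib64/samba/vfs/glusterfs.so | grep -i gfapi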
2017 Jul 28
0
how far should one go upping the gluster version before it can harm samba?
On 27/07/17 14:13, lejeczek wrote:
> ... or in other words - can samba break (on CentOS 7.3) if
> one takes the gluster version too high?
>
> hi fellas.
>
> I wonder because I see:
>
> smbd[4088153]: Unknown gluster ACL version: -847736808
> smbd[4088153]: [2017/07/27 13:12:54.047332, 0]
> ../source3/modules/vfs_glusterfs.c:1365(gluster_to_smb_acl)
>
2013 Oct 08
0
NFS client side - ls command giving input/output error
Hi All,
I have created a distributed volume on one server, which is also a DHCP
server.
I am booting another blade which gets its kernel image from the DHCP
server via the NFS mount, which is a gluster distributed volume.
This volume is successfully NFS mounted on the client side. I can
even cd into the directories contained in the NFS mount, and touch, cat and
cp operations on a file in the NFS mount work, but when I run the ls command on
the NFS mount, it gives "input/output error" on the...
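When chasing this kind of error it can help to confirm that gluster's built-in NFS server is healthy and that the client is mounting with NFSv3 (gNFS does not serve NFSv4); a minimal sketch, with "server", "distvol" and the mount point as placeholder names:

# On the server: is the gluster NFS service up and exporting the volume?
gluster volume status distvol nfs
gluster volume info distvol
showmount -e server

# On the client: mount explicitly as NFSv3 and retry the listing
mount -t nfs -o vers=3,nolock server:/distvol /mnt/distvol
ls /mnt/distvol

# The server-side gNFS log usually shows the matching failure
tail -n 50 /var/log/glusterfs/nfs.log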
2018 Apr 19
1
Meeting minutes: April 18th Maintainers meeting.
...kers equally busy
- Github Label check is now enforced:
- Need help from others to identify what is needed before giving
the flag.
- As "ndevos" asked, we need to highlight this in the Developer Guide
and other places in documentation.
- Can we fix the "gluster spec" format and ask people to fill in the
github issues in that format? That way it becomes easier to
give the flags.
- [Handled Already] Regression failures
- trash.t and nfs-mount-auth.t are failing frequently
- git bisect shows https://review.gluster.org/19837 as possible...
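On the bisect point above, the usual pattern is to let git drive the failing test; a rough sketch, assuming a glusterfs checkout where trash.t lives at tests/features/trash.t and can be run with prove after a build and install (the exact invocation in the regression harness will differ):

# Bisect the trash.t regression between a known-good commit and current HEAD
git bisect start
git bisect bad HEAD
git bisect good <last-known-good-commit>

# Let git drive it: build, install and run the failing test at each step;
# the script's exit code tells bisect whether the commit is good or bad
git bisect run sh -c 'make -j4 install >/dev/null 2>&1 && prove tests/features/trash.t'

git bisect reset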
2018 May 22
1
[SOLVED] [Nfs-ganesha-support] volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...with the
> recommended 3-node setup to avoid this, which would include a proper
> quorum? Or is there more to this and it really doesn't matter if I have
> a 2-node gluster cluster without a quorum and this is due to something
> else still?
>
> Again, anytime I check the gluster volumes, everything checks out. The
> results of both 'gluster volume info' and 'gluster volume status' is
> always as I pasted above, fully working.
>
> I'm also using the Linux KDC FreeIPA with this solution as well.
>
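Alongside 'gluster volume info' and 'gluster volume status', it can be worth capturing the effective quorum options too, since that is what the failed volume start is complaining about; a small sketch against the gv01 volume from this thread (assumes a gluster recent enough for 'volume get ... all'):

# Peers and volume health, as usually checked
gluster peer status
gluster volume status gv01

# Effective quorum-related options for the volume (defaults included)
gluster volume get gv01 all | grep -i quorum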
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...need to do here? Should I go with the
recommended 3-node setup to avoid this, which would include a proper
quorum? Or is there more to this and it really doesn't matter if I have
a 2-node gluster cluster without a quorum and this is due to something
else still?
Again, anytime I check the gluster volumes, everything checks out. The
results of both 'gluster volume info' and 'gluster volume status' is
always as I pasted above, fully working.
I'm also using the Linux KDC FreeIPA with this solution as well.
--
Cheers,
Tom K.
-------------------------------------------...
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables
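For anyone finding this later, the two parameters quoted above are applied per volume with 'gluster volume set'; a minimal sketch, assuming the gv01 volume from this thread, plus how to undo it:

# Disable client-side and server-side quorum on the 2-node volume
gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none

# Revert to defaults later, e.g. once a third node or arbiter is added
gluster volume reset gv01 cluster.quorum-type
gluster volume reset gv01 cluster.server-quorum-type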
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still I set the below two params to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need at least 3 nodes to have quorum enabled. In a 2-node setup you
> need to
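To make the 3-node recommendation concrete, one way to get a proper quorum without tripling the data is a replica 3 volume where the third node only holds an arbiter brick; a sketch with placeholder hostnames and brick paths:

# From node1: bring the other two peers into the pool
gluster peer probe node2
gluster peer probe node3

# Replica 3 with node3 holding only a small arbiter (metadata-only) brick
gluster volume create gv01 replica 3 arbiter 1 \
    node1:/bricks/gv01/brick node2:/bricks/gv01/brick node3:/bricks/gv01/brick
gluster volume start gv01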