Displaying 20 results from an estimated 200 matches similar to: "Domain users can't browse or access shares"
2015 Feb 09
0
Domain users can't browse or access shares
On 09/02/15 14:56, sk at green.no wrote:
> Dear mail list,
>
> I have an Ubuntu server which I upgraded from 11.10 to the current LTS 14.04
> running kernel: Linux bgo-nfs01 3.13.0-44-generic #73-Ubuntu SMP Tue Dec
> 16 00:22:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux.
> Filename: pool/main/s/samba/samba_4.1.6+dfsg-1ubuntu2_amd64.deb
>
> At the same time the purpose was changed
2015 Feb 09
3
Domain users can't browse or access shares
On 09/02/15 19:18, sk at green.no wrote:
> -----samba-bounces at lists.samba.org wrote: -----
>
>> To: samba at lists.samba.org
>> From: Rowland Penny
>> Sent by: samba-bounces at lists.samba.org
>> Date: 02/09/2015 05:12PM
>> Subject: Re: [Samba] Domain users can't browse or access shares
>>
>> OK, as I thought, your smb.conf is set up to use the
2015 Feb 09
3
Domain users can't browse or access shares
> From: Rowland Penny <rowlandpenny at googlemail.com>
> To: samba at lists.samba.org
> Date: 09.02.2015 16:09
> OK, does 'getent passwd sktest' show anything?
>
> I am willing to bet it doesn't.
Your bet is correct; wbinfo -u and wbinfo -g give the expected results,
though.
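When wbinfo works but getent does not, the usual suspects are the NSS
configuration or the idmap range. A quick diagnostic sketch (illustrative
commands, not from the original thread; 'sktest' is the test account used
above):

wbinfo -u                                # winbind can enumerate domain users
wbinfo -g                                # ...and domain groups
wbinfo -i sktest                         # winbind can map the account to a uid/gid
getent passwd sktest                     # NSS lookup via libnss_winbind
grep -E '^(passwd|group):' /etc/nsswitch.conf   # both lines should list 'winbind'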
2015 Feb 12
0
Domain users can't browse or access shares
samba-bounces at lists.samba.org wrote on 09.02.2015 20:52:43:
> OK, make the [global] part of your smb.conf look like this:
>
> [global]
> netbios name = bgo-nfs01
> workgroup = GREENREEFERS
> security = ADS
> realm = GREENREEFERS.NO
> dedicated keytab file = /etc/krb5.keytab
> kerberos method = secrets and keytab
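If the [global] section is reworked along these lines (the quoted excerpt is
cut off), the domain join and keytab are usually re-checked afterwards; a
sketch using standard Samba tooling, not taken from the original mail:

net ads testjoin                    # does the existing machine account still work?
net ads join -U Administrator       # re-join if the test fails (prompts for a password)
service smbd restart                # Ubuntu 14.04 service scripts
service winbind restart
klist -k /etc/krb5.keytab           # confirm the dedicated keytab was populated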
2015 Feb 14
3
Domain users can't browse or access shares
You are using the idmap module rid for your domain. I think getent passwd could not resolve anything because of your id range. I would try a range of 1000 (one thousand) to 99999 and see what happens.
New users in AD start with a RID of 1000. Well-known users like Administrator get their RIDs in the 500 range.
You should consider using rfc2307.
Regards
Tim
On 12 February 2015 10:51:47 CET,
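Tim's suggestion maps onto idmap settings roughly like the following; the
exact ranges are illustrative (they must not overlap), and the rfc2307
alternative assumes uidNumber/gidNumber attributes are maintained in AD:

# rid backend: IDs are derived from the RID, so the domain range has to be wide enough
idmap config * : backend = tdb
idmap config * : range = 100000-199999
idmap config GREENREEFERS : backend = rid
idmap config GREENREEFERS : range = 1000-99999

# alternative: take uid/gid numbers straight from AD (rfc2307 schema)
# idmap config GREENREEFERS : backend = ad
# idmap config GREENREEFERS : schema_mode = rfc2307
# idmap config GREENREEFERS : range = 1000-99999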
2015 Feb 12
2
Domain users can't browse or access shares
On 12/02/15 09:51, sk at green.no wrote:
> samba-bounces at lists.samba.org wrote on 09.02.2015 20:52:43:
>
>> OK, make the [global] part of your smb.conf look like this:
>>
>> [global]
>> netbios name = bgo-nfs01
>> workgroup = GREENREEFERS
>> security = ADS
>> realm = GREENREEFERS.NO
>>
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 10:52 AM, Rik Theys wrote:
> Hi,
>
> On 03/19/2018 03:42 PM, TomK wrote:
>> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
>> Removing NFS or NFS Ganesha from the equation, not very impressed on my
> own setup either. For the writes it's doing, that's a lot of CPU usage
>> in top. Seems bottle-necked via a single execution core somewhere trying
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node glusterfs setup, with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Any way to allow the secondary node to function, then replicate what
changed to the first (primary) when it's back online? Or should I just
go for a third node to allow for this?
Also, how safe is it to set the following to none?
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so as to still be able to use the volume when one of the
nodes goes down.
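The options in question can be set per volume; a minimal sketch for the gv01
volume named later in this thread, with the usual caveat that a 2-node volume
without quorum is exposed to split-brain:

gluster volume set gv01 cluster.quorum-type none          # client-side quorum off
gluster volume set gv01 cluster.server-quorum-type none   # server-side quorum off
gluster volume info gv01                                  # confirm the options took
# safer long-term fix: add a third (or arbiter) node and keep quorum enabled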
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, can't use the second
> node to mount the volume. I understand this is
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/11/2018 11:54 AM, Alex K wrote:
Hey Guys,
Returning to this topic, after disabling the quorum:
cluster.quorum-type: none
cluster.server-quorum-type: none
I've run into a number of gluster errors (see below).
I'm using gluster as the backend for my NFS storage. I have gluster
running on two nodes, nfs01 and nfs02. It's mounted on /n on each host.
The path /n is
2006 Jun 28
1
Strange stuff in autogen.sh
Hi,
In compiz' autogen.sh, there are these lines:
# work around bgo 323968
ln -s ../po config/po
intltoolize --force --copy --automake || exit 1
rm config/po
Now, I'm not really familiar with that intltoolize/autotools/etc. stuff, but
am curious what this is for?
What does bgo 323968 mean? Is this a bug number? It isn't in the fdo-bugzilla.
The reason I'm investigating this
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still I set the below two params to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need 3 nodes at least to have quorum enabled. In 2 node setup you
> need to
2017 Aug 25
2
Rolling upgrade from 3.6.3 to 3.10.5
Hi all,
I'm currently in the process of upgrading a replicated cluster (1 x 4) from
3.6.3 to 3.10.5. The nodes run CentOS 6. However after upgrading the first
node, the said node fails to connect to other peers (as seen via 'gluster
peer status'), but somehow other non-upgraded peers can still see the
upgraded peer as connected.
Writes to the Gluster volume via local mounts of
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
Removing NFS or NFS Ganesha from the equation, not very impressed on my
own setup either. For the writes it's doing, that's a lot of CPU usage
in top. Seems bottle-necked via a single execution core somewhere trying
to facilitate read / writes to the other bricks.
Writes to the gluster FS from within one of the gluster participating
bricks:
2018 May 22
1
[SOLVED] [Nfs-ganesha-support] volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
Appears I solved this one and NFS mounts now work on all my clients. No
issues since fixing it a few hours back.
RESOLUTION
Auditd is to blame for the trouble. Noticed this in the logs on 2 of
the 3 NFS servers (nfs01, nfs02, nfs03):
type=AVC msg=audit(1526965320.850:4094): avc: denied { write } for
pid=8714 comm="ganesha.nfsd" name="nfs_0"
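The AVC record is an SELinux denial (auditd only logs it). The excerpt does
not show the exact fix that was applied; a typical way to deal with a denial
like this is to build a local policy module from the audit log:

ausearch -m avc -c 'ganesha.nfsd'                                   # review the denials
ausearch -m avc -c 'ganesha.nfsd' --raw | audit2allow -M ganesha_local
semodule -i ganesha_local.pp                                        # load the module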
2017 Aug 25
0
Rolling upgrade from 3.6.3 to 3.10.5
You cannot do a rolling upgrade from 3.6.x to 3.10.x. You will need downtime.
Even 3.6 to 3.7 was not possible... see some references to it below:
https://marc.info/?l=gluster-users&m=145136214452772&w=2
https://gluster.readthedocs.io/en/latest/release-notes/3.7.1/
# gluster volume set <volname> server.allow-insecure on
Edit /etc/glusterfs/glusterd.vol to contain this line:
option
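The quoted line is cut off; the setting it appears to reference is the
rpc-auth option that the 3.7 notes pair with server.allow-insecure, roughly:

gluster volume set <volname> server.allow-insecure on   # per volume, as quoted above

# in /etc/glusterfs/glusterd.vol:
option rpc-auth-allow-insecure on

service glusterd restart    # glusterd.vol changes need a glusterd restart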
2017 Aug 25
2
Rolling upgrade from 3.6.3 to 3.10.5
Hi Diego,
Just to clarify, so did you do an offline upgrade with an existing cluster
(3.6.x => 3.10.x)?
Thanks.
On Fri, Aug 25, 2017 at 8:42 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I was never able to go from 3.6.x to 3.7.x without downtime. Then
> 3.7.x did not work well for me, so I stuck with 3.6.x until recently.
> I went from 3.6.x to 3.10.x but downtime was
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
> Removing NFS or NFS Ganesha from the equation, not very impressed on my
> own setup either. For the writes it's doing, that's a lot of CPU usage
> in top. Seems bottle-necked via a single execution core somewhere trying
> to facilitate read / writes to the other bricks.
>
>
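To pin down whether a single core or a particular fop is the bottleneck,
Gluster's built-in profiler plus a per-thread CPU view is a common starting
point; a sketch (the volume name gv01 is a placeholder borrowed from other
threads on this page):

top -H -p $(pgrep -d, glusterfsd)     # per-thread CPU usage of the brick processes

gluster volume profile gv01 start     # enable latency/fop statistics
# ... run the small-file copy workload ...
gluster volume profile gv01 info      # per-brick fop latencies and counts
gluster volume profile gv01 stop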
2017 Aug 25
0
Rolling upgrade from 3.6.3 to 3.10.5
Yes, I did an offline upgrade.
1. Stop all clients using gluster servers.
2. Stop glusterfsd and glusterd on both servers.
3. Backed up /var/lib/gluster* in all servers just to be safe.
4. Upgraded all servers from 3.6.x to 3.10.x (I did not have quotas or
anything that required special steps)
5. Started gluster daemons again and confirmed everything was fine
prior to letting clients connect.
5.
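The steps above translate into roughly the following on each server; a sketch
assuming CentOS with Gluster installed from a yum repository (the exact 3.10
repository setup is not shown in the thread):

# with all clients unmounted
service glusterd stop
service glusterfsd stop                          # or stop any remaining brick processes
cp -a /var/lib/glusterd /var/lib/glusterd.bak-$(date +%F)

yum -y update glusterfs\*                        # after enabling the 3.10 repository

service glusterd start
gluster peer status                              # all peers connected again?
gluster volume status                            # bricks online before clients reconnect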