Displaying 20 results from an estimated 6000 matches similar to: "Samba 3.0.33/3.2.15 AD joined slow initial connect with LDAP backend"
2009 Nov 30
1
Disable local login check Winbind
Does anybody know how to stop local login and ssh from checking
winbind? I'm seeing a delay when trying to log in through ssh when
Winbind's cache is not warm yet. I'm not using Winbind for local logins, so I'd
like to disable it there. In the past, when Winbind was having a problem I
could not log on to the box because it was checked in the login
procedure.
Winbind is used for AD
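One common approach (a sketch only, assuming a Debian-style PAM stack with pam_winbind wired into common-auth; file names, module order, and arguments vary by distro) is to take pam_winbind.so out of the auth stack that login and sshd use, so only pam_unix/pam_ldap are consulted:

```
# /etc/pam.d/common-auth -- hypothetical excerpt with the winbind line disabled
auth  [success=1 default=ignore]  pam_unix.so nullok_secure
# auth  [success=1 default=ignore]  pam_winbind.so krb5_auth try_first_pass
auth  requisite                   pam_deny.so
auth  required                    pam_permit.so
```

Note that the success=N skip counts must be adjusted when a line is removed, otherwise the stack falls through to pam_deny.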
2009 Mar 30
1
RPC fault code DCERPC_FAULT_OP_RNG_ERROR
I'm testing out a new Samba setup to hopefully replace my aging Win2k
domain. I've got some of it working:
- My PDC (shadow) seems to be working on the CASA domain with an LDAP
backend.
- nss_ldap and pam_ldap are working on shadow
- I can run wbinfo -u and get the user info from LDAP on shadow.
- I can run wbinfo -a username%password and authenticate a user on shadow.
I can run
2007 May 01
1
Problem with Samba-3.0.25rc3 & idmap_ldap (winbind dumps core)
In an effort to improve my lot, I'm trying to move to an LDAP backend
for idmap synchronization when I deploy the new 3.0.25 version on my
systems. In preparation for this, I've set up some test systems --
where I'm having some problems that I think others may be
encountering (according to a few comments I've seen recently).
In a nutshell, I believe I have set up my ldap
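For reference, a minimal smb.conf excerpt for the 3.0.x-era idmap_ldap backend might look like the following (a sketch with a hypothetical server, suffix, and ranges, not the poster's actual config; the idmap options were reworked in later Samba releases):

```
# smb.conf -- hypothetical Samba 3.0.x idmap_ldap excerpt
ldap idmap suffix = ou=Idmap
idmap backend = ldap:ldap://ldap.example.com
idmap uid = 10000-19999
idmap gid = 10000-19999
```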
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile?
Best Regards,
Strahil Nikolov
On Fri, Mar 24, 2023 at 8:07, Diego Zuccato <diego.zuccato at unibo.it> wrote:
In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
2023 Mar 21
1
How to configure?
I have no clue. Have you checked for errors in the logs? Maybe you might find something useful.
Best Regards,
Strahil Nikolov
On Tue, Mar 21, 2023 at 9:56, Diego Zuccato <diego.zuccato at unibo.it> wrote:
Killed glfsheal, after a day there were 218 processes, then they got
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports
2023 Mar 24
1
How to configure?
In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
volume file [{from server}, {errno=2}, {error=File o directory non
esistente}] (the error string is Italian for "No such file or directory")
And *lots* of gfid-mismatch errors in glustershd.log .
Couldn't find anything that would prevent heal from starting. :(
Diego
On 21/03/2023
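A quick way to see which errors dominate logs like these is to tally the MSGID codes (a sketch; the inlined sample lines stand in for the real glfsheal-Connection.log, which would be fed to the pipeline instead):

```shell
# Tally MSGID error codes by frequency; replace the variable with the log file in practice.
log='[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] 0-gfapi: failed to get the volume file
[2023-03-13 23:05:12.000000 +0000] E [MSGID: 104021] 0-gfapi: failed to get the volume file
[2023-03-13 23:06:03.000000 +0000] E [MSGID: 108008] 0-cluster_data-replicate-0: gfid mismatch'
echo "$log" | grep -o 'MSGID: [0-9]*' | sort | uniq -c | sort -rn
```

The most frequent MSGID (here 104021, the volfile fetch failure) floats to the top, which helps separate the root error from follow-on noise.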
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including
many files with names related to quorum bricks already moved to a
different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol
that should already have been replaced by
cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist).
Is there something I should check inside the volfiles?
Diego
On
2023 Mar 21
1
How to configure?
Killed glfsheal, after a day there were 218 processes, then they got
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports lots of files quite quickly but does
not spawn any glfsheal process. And neither does restarting glusterd.
Is there some way to selectively run glfsheal to fix one brick at a time?
Diego
On 21/03/2023 01:21,
2023 Mar 21
1
How to configure?
Theoretically it might help. If possible, try to resolve any pending heals.
Best Regards,
Strahil Nikolov
On Thu, Mar 16, 2023 at 15:29, Diego Zuccato <diego.zuccato at unibo.it> wrote:
In Debian stopping glusterd does not stop brick processes: to stop
everything (and free the memory) I have to
systemctl stop glusterd
killall glusterfs{,d}
killall glfsheal
systemctl start
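The sequence above can be wrapped in a small script (a sketch; the quoted message truncates after "systemctl start", so restarting glusterd as the last step is an assumption). A DRY_RUN guard, on by default here, makes it safe to preview:

```shell
# Stop all Gluster processes on a Debian node, then restart glusterd
# (the restart target is an assumption; the original message is cut off).
# With DRY_RUN set (the default below), commands are printed, not executed.
DRY_RUN=${DRY_RUN-1}
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

run systemctl stop glusterd
run killall glusterfs glusterfsd   # brick/client daemons survive the unit stop
run killall glfsheal               # leftover heal-info helper processes
run systemctl start glusterd       # assumed final step
```

Running it as-is prints the command list; setting DRY_RUN to empty executes it for real.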
2023 Apr 23
1
How to configure?
After a lot of tests and unsuccessful searching, I decided to start from
scratch: I'm going to ditch the old volume and create a new one.
I have 3 servers with 30 12TB disks each. Since I'm going to start a new
volume, could it be better to group the disks into 10 3-disk (or 6 5-disk)
RAID-0 volumes to reduce the number of bricks? Redundancy would be given
by replica 2 (still undecided
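The trade-off can be put in numbers (a sketch of the brick counts only; it says nothing about the larger failure domain and rebuild cost a RAID-0 set brings):

```shell
# Brick counts for 3 servers x 30 disks, grouped into RAID-0 sets of various sizes.
servers=3; disks=30
for per_set in 1 3 5; do
  echo "${per_set}-disk sets: $(( servers * disks / per_set )) bricks total"
done
```

So 3-disk sets cut the volume from 90 bricks to 30, and 5-disk sets to 18, at the price of losing a whole set when any one member disk fails.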
2008 Jan 02
0
winbind initialization: GetDC got invalid response type 21
Hi all,
I'm running Samba 3.0.28 on CentOS 5.1 as a PDC. I'm having problems
with winbind taking a long time to initialize or reconnect to the domain.
For example, starting winbind and then checking the trust secret takes
~30 seconds:
# time /usr/local/samba/bin/wbinfo -t
checking the trust secret via RPC calls succeeded
real 0m34.055s
user 0m0.008s
sys 0m0.019s
In the logs
2009 Nov 19
1
Other troubles
Hello again.
There are some more issues I still couldn't fix, and I can't say if it's
only a misunderstanding on my side, something that can't be done, or a
bug (which I doubt).
1) In our organization we have two "primary" domains (a lot of others,
but they're not interesting here). I tried changing the default
'PERSONALE' (where machine is joined) to
2023 Mar 16
1
How to configure?
Can you restart the glusterd service (first check that it was not modified to kill the bricks)?
Best Regards,
Strahil Nikolov
On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote:
OOM is just a matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal
2023 Mar 16
1
How to configure?
OOM is just a matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal processes.)
I'll take the last 5:
root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 0.0 600292 91964 ?
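The grep-counts-itself off-by-one noted above can be avoided; a sketch with a simulated ps listing so it runs anywhere (on the live system, `pgrep -c glfsheal` gives the same count without any pipeline):

```shell
# Simulated `ps aux` output: two glfsheal processes plus the grep itself.
sample='root 3266352 0.5 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
diego 3268000 0.0 grep glfsheal'

echo "$sample" | grep -c glfsheal               # 3: includes the grep line
echo "$sample" | grep glfsheal | grep -vc grep  # 2: the actual process count
```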
2023 Mar 15
1
How to configure?
If you don't experience any OOM, you can focus on the heals.
284 processes of glfsheal seems odd.
Can you check the ppid for 2-3 randomly picked ones? ps -o ppid= <pid>
Best Regards,
Strahil Nikolov
On Wed, Mar 15, 2023 at 9:54, Diego Zuccato <diego.zuccato at unibo.it> wrote:
I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume
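Strahil's ppid check can be looped over a few PIDs at once (a sketch; it falls back to the current shell's PID when no glfsheal is running, so the snippet stays runnable anywhere):

```shell
# Print the parent PID of up to three glfsheal processes.
pids=$(pgrep glfsheal | head -3)
[ -n "$pids" ] || pids=$$   # fallback so the loop always has something to show
for pid in $pids; do
  printf '%s -> ppid %s\n' "$pid" "$(ps -o ppid= -p "$pid" | tr -d ' ')"
done
```

If all the leaked glfsheal processes share one parent (glusterd, or some monitoring job re-running "heal info"), that parent is what keeps spawning them.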
2009 Oct 09
1
Domain trusts "forgetting" trusted users
I am running Samba ver 3.0.33 on Solaris 10 (sparc) as a PDC with LDAP
for the backend for both samba and unix accounts.
I have also set up a trust with a Windows domain - let's call it
WINDOMAIN - (the PDC for the Windows domain is Win 2003 but is in
mixed mode for backwards compat.) The SAMBA domain trusts the WINDOWS
domain, but not vice versa.
I had also tried setting up trusts with
2023 Mar 15
1
How to configure?
I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q
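The "Number of Bricks: 45 x (2 + 1) = 135" line can be cross-checked against the per-server layout mentioned elsewhere in the thread (a sketch; the numbers are taken from the volume info above):

```shell
# 45 replica-2+arbiter subvolumes spread across 3 servers.
subvols=45; data=2; arb=1; servers=3
echo "total bricks:       $(( subvols * (data + arb) ))"    # 135
echo "data bricks/server: $(( subvols * data / servers ))"  # 30
echo "arbiters/server:    $(( subvols * arb / servers ))"   # 15
```

This matches the "30 data bricks + 15 arbiters" per server figure from the Mar 14 message.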
2023 Mar 15
1
How to configure?
Do you use brick multiplexing?
Best Regards,
Strahil Nikolov
On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote:
Hello all.
Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
configured in replica 3
2023 Jun 05
1
Qustionmark in permission and Owner
I've seen something similar when the FUSE client died, but it marked the whole
mountpoint, not just some files.
Might be a desync or communication loss between the nodes?
Diego
On 05/06/2023 11:23, Stefan Kania wrote:
> Hello,
>
> I have a strange problem on a gluster volume
>
> If I do an "ls -l" in a directory inside a mounted gluster volume I
> see, only for some
2023 Mar 14
1
How to configure?
Hello all.
Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
configured in replica 3 arbiter 1. Using Debian packages from Gluster
9.x latest repository.
Seems 192G RAM are not enough to handle 30 data bricks + 15 arbiters and
I often had