similar to: Synchronising password of some AD users with an external LDAP?

Displaying 20 results from an estimated 500 matches similar to: "Synchronising password of some AD users with an external LDAP?"

2018 Aug 28
1
OpenLDAP support in future versions of CentOS
Stephen John Smoogen wrote: > On Tue, 28 Aug 2018 at 14:56, mark <m.roth at 5-cent.us> wrote: > >> >> Patrick Laimbock wrote: >> >>> On 28-08-18 17:51, Alicia Smith wrote: >>> >>>> >>>> I just joined this mailing list, so I apologize in advance if this >>>> topic has already been covered. >>>>
2018 Aug 28
3
OpenLDAP support in future versions of CentOS
Patrick Laimbock wrote: > On 28-08-18 17:51, Alicia Smith wrote: >> >> I just joined this mailing list, so I apologize in advance if this >> topic has already been covered. >> >> Red Hat and Suse announced they are no longer supporting OpenLDAP in >> future releases. https://www.ostechnix.com/redhat-and-suse-announced-to- >>
2011 Mar 16
0
problems creating read-only, 'consumer' dirsrv replica
Hello, I am trying to deploy an additional read-only replica (a.k.a. 'consumer') in a single-master dirsrv environment. The master, and the other pre-existing consumer servers, are all 'fedora-ds' running on Fedora 7. I'm trying to add a consumer running on CentOS 5.5. Ultimately, I intend to replace the Fedora DS servers with CentOS dirsrv servers. I'm trying to deploy
2011 May 12
4
ApacheDS vs OpenLDAP
Hi all, Wondering if any of you have thoughts/experiences with ApacheDS? We've all had trials and tribulations regarding OpenLDAP, and while it's basically working pretty well in a master/slave relationship, ApacheDS claims more robust replication, etc... Granted I am working with the version bundled with CentOS, I do understand that the latest OpenLDAP is wayyyyyyy betterrrr :) - Aurf
2018 Aug 28
0
OpenLDAP support in future versions of CentOS
On Tue, 28 Aug 2018 at 14:56, mark <m.roth at 5-cent.us> wrote: > > Patrick Laimbock wrote: > > On 28-08-18 17:51, Alicia Smith wrote: > >> > >> I just joined this mailing list, so I apologize in advance if this > >> topic has already been covered. > >> > >> Red Hat and Suse announced they are no longer supporting OpenLDAP in >
2012 Jul 31
1
Samba 4 install fails, no matter what I do
I can't install Samba 4 in practically any fashion. I've tried the Debian packages without much success (see https://lists.samba.org/archive/samba-technical/2012-July/085301.html). I later figured out that it is not possible to use those packages without using ntvfs (see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=679678). I've attempted to compile it from source under Debian
2012 Jul 30
1
"make install" fails, can't link libreplace.inst.so
I can compile Samba4 beta 4, but can't install it: root at samba4dc:/usr/src/samba-4.0.0beta4# ./configure.developer <snip> 'configure' finished successfully (49.871s) root at samba4dc:/usr/src/samba-4.0.0beta4# make WAF_MAKE=1 ./buildtools/bin/waf build <snip> Waf: Leaving directory `/usr/src/samba-4.0.0beta4/bin' 'build' finished successfully
2013 May 10
1
Samba 3 member, winbind caching and DC availability
Hello all, I've a box running Samba 3.5.6 (Debian Squeeze) that retrieves its user accounts from AD, using Winbind. The box is receiving incoming mail. Idmap backend is AD, with rfc2307 schema mode. Currently it's only accessing one AD DC, and the MTA on the Samba box is stopped whenever the DC is temporarily offline to prevent rejection of any incoming mail with "user unknown"
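The setup described (Winbind pulling accounts from AD, idmap backend "ad" with rfc2307 schema mode) can be sketched in smb.conf roughly as follows. This is a minimal illustration, not the poster's actual configuration; the workgroup, realm, and id ranges are placeholders:

```ini
[global]
    security = ads
    workgroup = EXAMPLE
    realm = EXAMPLE.COM
    # Winbind retrieves users/groups from AD; uid/gid come from rfc2307 attributes
    idmap config * : backend = tdb
    idmap config * : range = 1000000-1999999
    idmap config EXAMPLE : backend = ad
    idmap config EXAMPLE : schema_mode = rfc2307
    idmap config EXAMPLE : range = 10000-999999
    # cache winbind lookups to soften short DC outages
    winbind cache time = 300
    winbind offline logon = yes
```

Whether cached entries survive a DC outage long enough for the MTA depends on the cache timeouts and on which lookups the MTA actually performs.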
2015 Jul 03
3
NT_STATUS_INTERNAL_DB_CORRUPTION messages in log.samba--proper course of action?
Hi all, We've recently migrated from a separate DNS server that was dynamically updated with BIND's update-policy, using a manually generated tkey-gssapi-keytab (plus a second server functioning as an ordinary slave to the first), to BIND9_DLZ. The setup predated Samba's AD DC support and BIND's DLZ support, and was originally established because even though we needed AD, we were
2013 Apr 22
1
New Windows 8 RSAT and "OU=Domain Controllers" support?
Hello, We have two DCs. One runs Windows 2003 R2, and the other Samba 4.0.5. Forest functional level is Windows 2000 native. I recently demoted (it worked flawlessly this time, which was a great relief), rebuilt and re-promoted my Samba 4 DC, as the problems that I posted to this list about two months ago were still unresolved (see https://lists.samba.org/archive/samba/2013-February/171898.html), and I thought
2013 May 10
1
Sudden authentication failures, hex dumps in log.samba
In a leap of faith, I decided to relax the iptables rules on our Samba DC (4.0.5) on Wednesday, permitting some of our production clients to actually authenticate against it (in addition to our W2k3R2 DC). After all, there are no replication errors and no errors either in log.samba or Windows event log, so things _should've_ been generally working, and various test clients also have had no
2017 Nov 15
2
Help with reconnecting a faulty brick
On 13/11/2017 at 21:07, Daniel Berteaud wrote: > > On 13/11/2017 at 10:04, Daniel Berteaud wrote: >> >> Could I just remove the content of the brick (including the >> .glusterfs directory) and reconnect ? >> > > In fact, what would be the difference between reconnecting the brick > with a wiped FS, and using > > gluster volume remove-brick vmstore
2017 Oct 11
5
gluster volume + lvm : recommendation or neccessity ?
Hi everyone, I've read on the gluster & redhat documentation, that it seems recommended to use XFS over LVM before creating & using gluster volumes. Sources : https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ My point is :
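The brick-preparation recipe the linked Red Hat guide describes (LVM underneath, XFS on top) looks roughly like the sketch below. Device names, sizes, volume group and mount point are all placeholders, not values from the thread:

```shell
# Carve a logical volume out of a dedicated disk for the brick
pvcreate /dev/sdb
vgcreate gluster_vg /dev/sdb
lvcreate -L 500G -n brick1 gluster_vg

# 512-byte inodes leave room for Gluster's extended attributes
mkfs.xfs -i size=512 /dev/gluster_vg/brick1

mkdir -p /bricks/brick1
mount /dev/gluster_vg/brick1 /bricks/brick1
echo '/dev/gluster_vg/brick1 /bricks/brick1 xfs defaults 0 0' >> /etc/fstab
```

LVM is a recommendation rather than a hard requirement: Gluster only needs a directory on a local filesystem, but LVM (especially thin-provisioned) is what enables volume snapshots and easy brick resizing later.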
2017 Sep 20
3
Backup and Restore strategy in Gluster FS 3.8.4
> First, please note that gluster 3.8 is EOL and that 3.8.4 is rather old in > the 3.8 release, 3.8.15 is the current (and probably final) release of 3.8. > > "With the release of GlusterFS-3.12, GlusterFS-3.8 (LTM) and GlusterFS- > 3.11 (STM) have reached EOL. Except for serious security issues no > further updates to these versions are forthcoming. If you find a bug please
2018 Jan 23
6
parallel-readdir is not recognized in GlusterFS 3.12.4
Hello, I saw that parallel-readdir was an experimental feature in GlusterFS version 3.10.0, became stable in version 3.11.0, and is now recommended for small file workloads in the Red Hat Gluster Storage Server documentation[2]. I've successfully enabled this on one of my volumes but I notice the following in the client mount log: [2018-01-23 10:24:24.048055] W [MSGID: 101174]
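Enabling the option the poster mentions is a one-liner per volume; a sketch, with `myvol` as a placeholder volume name:

```shell
# parallel-readdir only takes effect when readdir-ahead is also enabled
gluster volume set myvol performance.readdir-ahead on
gluster volume set myvol performance.parallel-readdir on

# confirm the effective value
gluster volume get myvol performance.parallel-readdir
```

The warning in the client mount log is typically emitted by clients older than the version that introduced the option, which do not recognize the translator.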
2017 Nov 15
0
Help with reconnecting a faulty brick
On 11/15/2017 12:54 PM, Daniel Berteaud wrote: > > > > On 13/11/2017 at 21:07, Daniel Berteaud wrote: >> >> On 13/11/2017 at 10:04, Daniel Berteaud wrote: >>> >>> Could I just remove the content of the brick (including the >>> .glusterfs directory) and reconnect ? >>> >> If it is only the brick that is faulty on the bad node,
2017 Nov 16
2
Help with reconnecting a faulty brick
On 15/11/2017 at 09:45, Ravishankar N wrote: > If it is only the brick that is faulty on the bad node, but everything > else is fine, like glusterd running, the node being a part of the > trusted storage pool etc., you could just kill the brick first and do > step-13 in "10.6.2. Replacing a Host Machine with the Same Hostname", > (the mkdir of non-existent dir,
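The "same hostname" replacement procedure the reply points to can be sketched as follows. This is an illustration of the documented steps, not the poster's exact commands; the brick path, volume name and FUSE mount point are placeholders:

```shell
# 1. Kill the faulty brick process (find its pid via `gluster volume status`)
kill -15 <brick-pid>

# 2. On a FUSE mount of the volume, touch metadata on the root so the
#    healthy replica is marked as the heal source for the wiped brick
mkdir /mnt/fuse-mount/nonexistent-dir
rmdir /mnt/fuse-mount/nonexistent-dir
setfattr -n trusted.non-existent-key -v abc /mnt/fuse-mount
setfattr -x trusted.non-existent-key /mnt/fuse-mount

# 3. Restart the brick and let self-heal repopulate it from its replica
gluster volume start myvol force
gluster volume heal myvol full
```

The mkdir/rmdir and setfattr pair exist only to dirty the root directory's changelog; self-heal then treats the empty brick as the sink.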
2018 Jan 24
1
Split brain directory
Hello, I'm trying to fix an issue with a directory split-brain on a Gluster 3.10.3. The effect is that a specific file in this split directory is randomly unavailable on some clients. I have gathered all the information in this gist: https://gist.githubusercontent.com/lucagervasi/534e0024d349933eef44615fa8a5c374/raw/52ff8dd6a9cc8ba09b7f258aa85743d2854f9acc/splitinfo.txt I discovered the
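For a split-brain of this kind, the CLI offers inspection and source-brick resolution; a sketch with placeholder volume, brick and path names (not taken from the poster's gist):

```shell
# list entries the self-heal daemon considers split-brained
gluster volume heal myvol info split-brain

# pick one brick's copy as authoritative for the affected entry
gluster volume heal myvol split-brain source-brick \
    server1:/bricks/brick1 /path/to/splitbrained/entry
```

Directory (entry) split-brains can be trickier than data split-brains and sometimes require removing the offending gfid links on the sink brick by hand before healing.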
2013 Jun 08
3
Virtualization in RHEL
Hello friends, I need a guide to virtualization in RHEL. I have tried many ways but I always get a different error; in fact it is very difficult and I don't know what I'm doing wrong. What I want to achieve is to install a virtual machine from a ks.cfg on RHEL 6. I would appreciate it if someone could share some guidance. I've searched Google, YouTube, and other pages about it,
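One common way to do what the poster describes (an unattended KVM guest install on RHEL 6 driven by a kickstart file) is virt-install. A sketch; the VM name, install tree URL, kickstart URL and disk size are placeholders:

```shell
# Unattended text-mode install: the installer tree comes from --location,
# and the kickstart file is passed to Anaconda via --extra-args
virt-install \
  --name testvm \
  --ram 2048 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=20 \
  --location http://mirror.example.com/rhel6/os/x86_64/ \
  --extra-args "ks=http://mirror.example.com/ks.cfg console=ttyS0" \
  --graphics none
```

`--extra-args` (and therefore a network-served ks.cfg) only works with `--location` installs, not with `--cdrom`, which is a frequent source of the "different error every time" experience.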
2023 Oct 25
1
Replace faulty host
Hi all, I have a problem with one of our gluster clusters. This is the setup: Volume Name: gds-common Type: Distributed-Replicate Volume ID: 42c9fa00-2d57-4a58-b5ae-c98c349cfcb6 Status: Started Snapshot Count: 26 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: urd-gds-031:/urd-gds/gds-common Brick2: urd-gds-032:/urd-gds/gds-common Brick3: urd-gds-030:/urd-gds/gds-common
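Given the replica layout shown in the `gluster volume info` output, replacing a failed host usually comes down to `replace-brick`. A sketch, assuming urd-gds-032 is the faulty host and urd-gds-033 is a freshly prepared replacement (the new hostname is a placeholder, not from the thread):

```shell
# bring the replacement node into the trusted pool
gluster peer probe urd-gds-033

# swap the dead brick for the new one and trigger healing onto it
gluster volume replace-brick gds-common \
    urd-gds-032:/urd-gds/gds-common \
    urd-gds-033:/urd-gds/gds-common \
    commit force

gluster volume heal gds-common full
```

With 26 snapshots on the volume, note that snapshots reference the old brick layout; depending on the Gluster version, they may need to be removed before `replace-brick` is accepted.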