
Displaying 20 results from an estimated 5000 matches similar to: "Likelihood of deviation"

2023 Feb 13
1
ctdb tcp kill: remaining connections
Hello, we are using ctdb 4.15.5 on RHEL8 (Kernel 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via TCP) to RHEL7/8 clients. Whenever an IP takeover happens, most clients report something like this: [Mon Feb 13 12:21:22 2023] nfs: server x.x.253.252 not responding, still trying [Mon Feb 13 12:21:28 2023] nfs: server x.x.253.252 not responding, still trying [Mon Feb 13 12:22:31 2023] nfs: server
2023 Feb 15
1
ctdb tcp kill: remaining connections
Hi Uli, [Sorry for slow response, life is busy...] On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba <samba at lists.samba.org> wrote: > we are using ctdb 4.15.5 on RHEL8 (Kernel > 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8 > clients. Whenever an ip takeover happens most clients report > something like this: > [Mon Feb 13 12:21:22
2023 Feb 16
1
ctdb tcp kill: remaining connections
On Thu, 16 Feb 2023 17:30:37 +0000, Ulrich Sibiller <ulrich.sibiller at atos.net> wrote: > Martin Schwenke wrote on 15.02.2023 23:23: > > OK, this part looks kind-of good. It would be interesting to know how > long the entire failover process is taking. > > What exactly would you define as the beginning and end of the failover? From "Takeover run
2020 Aug 08
1
CTDB question about "shared file system"
On Sat, Aug 8, 2020 at 2:52 AM Martin Schwenke <martin at meltin.net> wrote: > Hi Bob, > > On Thu, 6 Aug 2020 06:55:31 -0400, Robert Buck <robert.buck at som.com> wrote: > > And so we've been rereading the doc on the public addresses file. So it may be we have gravely misunderstood the *public_addresses* file, we never read
2020 Aug 06
2
CTDB question about "shared file system"
Very helpful. Thank you, Martin. I'd like to share the information below with you and solicit your fine feedback :-) I provide additional detail in case there is something else you feel strongly we should consider. We made some changes last night, let me share those with you. The error that is repeating itself and causing these failures is: Takeover run starting RELEASE_IP 10.200.1.230
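For reference, the public_addresses file at the centre of this thread is just a plain-text list, one public IP per line with its netmask and the interface CTDB should bring it up on. A minimal sketch using the address from the error above (the /24 mask and the eth1 interface are only assumptions for illustration):

    # /etc/ctdb/public_addresses -- one "IP/mask interface" entry per line
    10.200.1.230/24 eth1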
2017 Jan 24
2
public ip is assigned to us but not on an interface - error
Hi all, I had a working cluster, very basic, standard. I'm not sure if recent updates broke it. I see these: 2017/01/24 22:20:05.025164 [recoverd: 3474]: Public IP '10.5.10.51' is assigned to us but not on an interface 2017/01/24 22:20:05.027571 [recoverd: 3474]: Trigger takeoverrun 2017/01/24 22:20:05.053386 [recoverd: 3474]: Takeover run starting 2017/01/24 22:20:05.106044 [
2023 Jan 26
1
ctdb samba and winbind event problem
Hi to all, I have a CTDB cluster with two nodes (both Ubuntu with Sernet packages 4.17.4). Now I want to replace one of the nodes. The first step was to bring a new node into the CTDB cluster, this time a Debian 11, but with the same Sernet packages (4.17.4). I added the new node to /etc/ctdb/nodes at the end of the list, and the virtual IP to /etc/ctdb/public_addresses, also at the end
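A rough sketch of what appending the new node looks like, with made-up addresses: /etc/ctdb/nodes simply lists the internal address of each cluster node, one per line; the file must be identical on every node and the line order defines the node numbers, which is why the new node is added at the end:

    192.168.10.1
    192.168.10.2
    192.168.10.3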
2008 Feb 12
1
CTDB and LDAP: anyone?
Hi there, I am looking into using CTDB between a PDC and a BDC. I assume this is possible! However, I have a few questions: 1: Do I have to use tdb2 as an idmap backend? Can I not stay with LDAP? (from the CTDB docs: A clustered Samba install must set some specific configuration parameters clustering = yes idmap backend = tdb2 private dir = /a/directory/on/your/cluster/filesystem It is
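Written out as an smb.conf fragment, the parameters quoted above from the CTDB docs of that era would look like this; the private dir path is just the placeholder from the docs, standing for a directory on the cluster filesystem, and whether LDAP can be kept for idmap is exactly the open question here:

    [global]
        clustering = yes
        idmap backend = tdb2
        private dir = /a/directory/on/your/cluster/filesystem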
2023 Feb 16
1
ctdb tcp kill: remaining connections
Martin Schwenke wrote on 15.02.2023 23:23: > Hi Uli, > > [Sorry for slow response, life is busy...] Thanks for answering anyway! > On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba > OK, this part looks kind-of good. It would be interesting to know how > long the entire failover process is taking. What exactly would you define as the beginning and end of the
2008 Nov 06
2
Cluster Heart Beat Using Cross Over Cable
Hi, I am running a two-node active/passive cluster on CentOS 3 update 8, 64-bit, on an HP box with external HP storage connected via SCSI. My cluster had been running fine for the last 3 years, but all of a sudden the cluster service keeps shifting (at least once a day) from one node to another. After analysing the syslog I found that the service was being shifted due to some network fluctuation. Both
2007 Oct 10
3
failover with conntrackd
Hi. Is anyone using conntrack-tools to implement gateway failover on a network with Windows clients? I set it up with ucarp and keepalived, and found that gratuitous ARP doesn't always seem to update the cache on Windows machines. It works the first time, but if a second failover happens, the client continues to send traffic to the wrong MAC address. Linux machines work fine.
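One commonly suggested workaround for the stale ARP cache symptom described here is to have keepalived repeat and periodically refresh gratuitous ARP after a failover. A sketch of the relevant global settings in keepalived.conf, assuming a keepalived version that supports these options (the timings are arbitrary examples, untested):

    global_defs {
        # send a burst of 5 gratuitous ARPs when becoming MASTER
        vrrp_garp_master_repeat 5
        # keep refreshing GARP every 10 seconds while MASTER
        vrrp_garp_master_refresh 10
        vrrp_garp_master_refresh_repeat 2
    }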
2000 Jan 10
1
Samba on an AIX - HACMP Cluster
Hi! Does anybody have experience running Samba on an IBM AIX High Availability Cluster with 2 nodes? At the moment we are using NFS + Maestro NFS-Client for NT and want to change to server software using SMB (meaning either IBM FastConnect or Samba). The server is supposed to be a file server for about 1000 users. We don't need advanced features like load balancing or takeover of active
2020 Feb 10
2
ctdb failover interrupts Windows file copy
Hello, we have set up CTDB + Samba v4.11.1 and are testing with a Windows client; failover works mostly OK. However, when using a regular Windows file copy, the copy operation is interrupted during IP takeover. Is there any solution to make the failover transparent to Windows file copy? I understand that SMB2 durable handles should not be used in a clustered setup, and SMB3 persistent handles +
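For background on the durable handles point: an SMB2 durable handle is per-node state that a CTDB cluster cannot hand over to another node, which is why clustered setups are usually advised to switch it off, for example (a sketch, not a complete clustered smb.conf):

    [global]
        clustering = yes
        # durable handle state is not shared between cluster nodes,
        # so do not promise clients handles that cannot survive failover
        durable handles = no

This avoids broken promises to clients, but it does not by itself make a Windows file copy survive an IP takeover; that is exactly the SMB3 persistent handles question the poster raises.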
2006 Oct 16
2
usb stopped working
I think that the latest kernel has broken USB on my AMD box. The modprobe for ohci_hcd fails now. I have tried UP and SMP. I am going to drop back a kernel version to see if that fixes it. Anyone else seeing anything similar? ohci_hcd: 2004 Feb 02 USB 1.1 'Open' Host Controller (OHCI) Driver (PCI) ACPI: PCI interrupt 0000:00:02.0[A] -> GSI 22 (level, low) -> IRQ 185
2020 Sep 18
2
Samba impact of "ZeroLogon" CVE-2020-1472
On Fri, 2020-09-18 at 15:39 +0200, Marco Gaiarin via samba wrote: > Hello, Karolin Seeger via samba > On that day it was said... > > > (Both as classic/NT4-style and Active Directory DC.) > > I've searched for some info on the impact of this bug on NT domains, finding > nothing on the net. > > OK, NT domains are dead, I know, but... I seek some feedback. > On real
2006 Apr 24
1
Stateful Takeover in a Cluster environment
Hey, I am new to Samba and have a few queries. How can you achieve "stateful takeover" for a Samba session? My goal is to get a Samba service running over a cluster. For the client it should be transparent which server it connects to. If a node in the cluster dies, the connection should move, with all its state, over to another node. I know Samba 3 is not cluster-aware; perhaps somebody knows
1999 Jan 27
3
Samba, nmbd, HA
Has anyone ever set up a pair of file servers running Samba such that each server can take over the other server's functionality and identity? It seems to me that a Samba HA setup would be very similar to a virtual server Samba setup. In other words, use the 'include' directive in conjunction with the %L variable to make each smbd process act as the server named by the client. So far so
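The %L technique mentioned above would look roughly like this in smb.conf: the main file holds the shared settings and pulls in a per-identity fragment, and because %L expands to the NetBIOS name the client used to reach the server, one smbd can answer as either identity (the alias and include file names below are purely illustrative):

    [global]
        netbios aliases = FILESERV1 FILESERV2
        # %L = the NetBIOS name the client called us by
        include = /etc/samba/smb.conf.%L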
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha , unplug the network cable of serving node, takes around ~20 mins for IO to resume
Hi all, we did some failover/failback tests on 2 nodes (A and B) with the architecture 'glusterfs + ctdb (public address) + nfs-ganesha'. 1st: During a write, unplug the network cable of serving node A -> the NFS client took a few seconds to recover and continue writing. After some minutes, plug the network cable of serving node A back in -> the NFS client also took a few seconds to recover
2019 May 16
2
CTDB node stuck in "ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi everybody, I just updated my ctdb node from Samba version 4.9.4-SerNet-Debian-11.stretch to Samba version 4.9.8-SerNet-Debian-13.stretch. After restarting the sernet-samba-ctdbd service the node doesn't come back and remains in state "UNHEALTHY". I can find this in the syslog: May 16 11:25:40 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445 May 16
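The 50.samba failure above just means the CTDB monitor event could not find anything listening on the SMB port on that node. A quick manual check on the affected node (assuming iproute2's ss and the standard procps tools are available) would be something like:

    # is anything listening on TCP port 445 on this node?
    ss -tln | grep ':445'
    # is smbd actually running?
    pgrep -l smbd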
2020 Jul 08
3
Urgent Help required
On July 8, 2020 11:01:20 AM AKDT, Alexander Dalloz <ad+lists at uni-x.org> wrote: >On 08.07.2020 at 20:28, Kishore Potnuru wrote: >> Thank you for the reply. >> >> As per our current infrastructure, I can go to a maximum of the RedHat 7.7 >> version, not more than that. Am I able to install or upgrade to dovecot 2.3 >> version in RedHat 7.7? I am running