Displaying 20 results from an estimated 23 matches for "sibilants".
2023 Mar 09
1
ctdb tcp kill: remaining connections
Martin Schwenke schrieb am 01.03.2023 23:53:
> Not perfect, but better...
Yes, I am quite happy with the ctdb_killtcp.
> For ctdb_killtcp, when it was basically rewritten, we considered adding
> options for max_attempts, but decided to see if it was foolproof. We
> could now add those options. Patches welcome too...
I'll have a look.
> MonitorTimeoutCount defaults to 20
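For reference, a minimal sketch of inspecting and changing that tunable at runtime (the value 40 is purely illustrative, not a recommendation):

  # Query the monitor timeout tunable on the local ctdb node (defaults
  # to 20, as noted above), then raise it.
  ctdb getvar MonitorTimeoutCount
  ctdb setvar MonitorTimeoutCount 40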
2024 Oct 16
1
ctdb tcp kill: remaining connections
Hi Ulrich,
[Reviving an old thread - I owe you an answer :-)]
On Thu, 9 Mar 2023 17:02:15 +0000, Ulrich Sibiller via samba
<samba at lists.samba.org> wrote:
> Martin Schwenke schrieb am 01.03.2023 23:53:
> > On Wed, 1 Mar 2023 16:18:58 +0000, Ulrich Sibiller
<ulrich.sibiller at atos.net> wrote:
> > > which ignores the port and thus matches all connections for
2024 Oct 17
1
ctdb tcp kill: remaining connections
Hi Uli,
On Wed, 16 Oct 2024 15:18:13 +0000, Ulrich Sibiller
<ulrich.sibiller at eviden.com> wrote:
> Martin Schwenke schrieb am 16.10.2024 04:33:
> > In this old thread, we also discussed problems with ctdb_killtcp. The
> > patch series containing the above change also adds a script option to
> > enable use of "ss -K" for resetting TCP connections to a
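A minimal sketch of what an "ss -K" based reset could look like, run on the node releasing a public IP (the address 192.0.2.10 and port 2049 are illustrative assumptions; the kernel must be built with CONFIG_INET_DIAG_DESTROY=y):

  # Forcibly close every established TCP connection whose local side is
  # the released public IP and the NFS port; ss prints the sockets it
  # kills and silently skips ones the kernel cannot destroy.
  ss -K state established src 192.0.2.10 sport = :2049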
2023 Nov 15
1
understanding stat cache
Hi,
On 11/15/23 09:33, Ulrich Sibiller wrote:
> Thanks for that hint about case sensitivity's performance penalty.
>
> For clarification: The user is doing mainly reads, so does the
> "create" you mention also cover opening/reading files?
no.
Is it a metadata-heavy small-file workload? That will likely be somewhat
slower compared to Windows, but not in the order of
2023 Nov 15
1
understanding stat cache
Hello Ralph,
Thanks for that hint about case sensitivity's performance penalty.
For clarification: The user is doing mainly reads, so does the "create" you mention also cover opening/reading files? If only _creation_ of files is suffering from that, we probably have some other/further performance issue.
We have gpfs, which does not offer a case-insensitive mode, neither does the
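For reference, the relevant smb.conf knob here is "case sensitive"; a minimal sketch of checking its effective value on a share (the share name "data" is an assumption):

  # Print the effective value of "case sensitive" for one share; with
  # "yes", smbd skips the case-insensitive name matching that causes
  # the penalty discussed above.
  testparm -s --section-name=data --parameter-name='case sensitive'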
2023 Nov 13
1
understanding stat cache
Hello Ulrich,
On 11/10/23 13:47, Ulrich Sibiller via samba wrote:
> We have a user that switched from Linux to Windows with his
> engineering software. Previously he was using NFS to access data and
> there were no performance complaints.
>
> Now, with Windows, the same procedures take minutes instead of
> seconds.
the classic workload where Samba performance sucks is when
2025 May 28
1
ctdb tcp kill: remaining connections
Martin Schwenke schrieb am 17.10.2024 13:00:
>> Thanks! I hope to be able to use a current version soon.
>
> Of course, I meant the next minor version (e.g. 4.22.x), since none of
> this is really bug fixes...
Unfortunately I am still not able to run the current version, but for this problem it should not matter, because the current code is unchanged in that regard:
We are
2023 Nov 10
2
understanding stat cache
Hello,
We have a user that switched from Linux to Windows with his engineering software. Previously he was using NFS to access data and there were no performance complaints.
Now, with Windows, the same procedures take minutes instead of seconds.
I created some log files in Samba 4.10.16-25.el7_9 (recompiled with gpfs support and using the gpfs vfs module) with debuglevel 10 to see if
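As a side note, a debug level like the one mentioned can also be set on a running smbd; a minimal sketch:

  # Raise the debug level of all running smbd processes to 10 without
  # restarting, then read the current level back.
  smbcontrol smbd debug 10
  smbcontrol smbd debuglevel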
2024 Oct 15
1
ctdb tcp settings for statd failover
Hi,
In current samba (commit 6140c3177a0330f42411618c3fca28930ea02a21), in ctdb/tools/statd_callout_helper, I find this comment:
notify)
...
# we need these settings to make sure that no tcp connections survive
# across a very fast failover/failback
#echo 10 > /proc/sys/net/ipv4/tcp_fin_timeout
#echo 0 > /proc/sys/net/ipv4/tcp_max_tw_buckets
#echo 0 >
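A minimal sketch of the same tunings via sysctl(8); the third setting is truncated above and therefore not reproduced:

  # Shorten the FIN-WAIT-2 lifetime and keep no TIME-WAIT sockets, so
  # that no TCP state survives a fast failover/failback.
  sysctl -w net.ipv4.tcp_fin_timeout=10
  sysctl -w net.ipv4.tcp_max_tw_buckets=0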
2024 Dec 06
0
[Bug 396] sshd orphans processes when no pty allocated
https://bugzilla.mindrot.org/show_bug.cgi?id=396
Ulrich Sibiller <uli42 at gmx.de> changed:
           What       |Removed   |Added
----------------------------------------------------------------------------
           Status     |RESOLVED  |REOPENED
           Resolution |FIXED     |---
--- Comment #24 from Ulrich Sibiller <uli42 at
2025 May 29
1
ctdb tcp kill: remaining connections
Hi Uli,
On Wed, 28 May 2025 13:12:29 +0000, Ulrich Sibiller
<ulrich.sibiller at eviden.com> wrote:
> We are exporting GPFS filesystems with NFSv3 via ctdb. Today I have
> stopped ctdb on one node, and the IPs got automatically moved to
> another node. This is something that always works like a charm.
> However, many NFS clients started complaining very soon, a phenomenon
>
2024 Oct 16
1
ctdb tcp settings for statd failover
Hi Ulrich,
On Tue, 15 Oct 2024 15:22:51 +0000, Ulrich Sibiller via samba
<samba at lists.samba.org> wrote:
> In current samba (commit 6140c3177a0330f42411618c3fca28930ea02a21), in
> ctdb/tools/statd_callout_helper, I find this comment:
>
> notify)
> ...
> # we need these settings to make sure that no tcp connections
> survive # across a very fast failover/failback
2023 Feb 16
1
ctdb tcp kill: remaining connections
Martin Schwenke schrieb am 15.02.2023 23:23:
> Hi Uli,
>
> [Sorry for slow response, life is busy...]
Thanks for answering anyway!
> On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
> OK, this part looks kind-of good. It would be interesting to know how
> long the entire failover process is taking.
What exactly would you define as the beginning and end of the
2023 Feb 15
1
ctdb tcp kill: remaining connections
Hi Uli,
[Sorry for slow response, life is busy...]
On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
<samba at lists.samba.org> wrote:
> we are using ctdb 4.15.5 on RHEL8 (kernel
> 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via TCP) to RHEL7/8
> clients. Whenever an IP takeover happens, most clients report
> something like this:
> [Mon Feb 13 12:21:22
2019 Apr 30
1
[Bug 13920] New: --max-delete and dirs being replaced by symlinks on source
https://bugzilla.samba.org/show_bug.cgi?id=13920
Bug ID: 13920
Summary: --max-delete and dirs being replaced by symlinks on
source
Product: rsync
Version: 3.1.2
Hardware: All
OS: All
Status: NEW
Severity: normal
Priority: P5
Component: core
Assignee: wayne
2024 Dec 06
0
[Bug 396] sshd orphans processes when no pty allocated
https://bugzilla.mindrot.org/show_bug.cgi?id=396
--- Comment #23 from Ulrich Sibiller <uli42 at gmx.de> ---
I have not tested it, but the names of the two options do not suggest
that they are the exact fix for this problem, as this happens when
_killing_ the connection, see Description. While a timeout might kill
the dangling processes, it will probably do so after some _time_, not
at the time the
2006 Nov 11
1
FW: [tclug-list] Drives Not Recognized on Dell Poweredge 1550 CentOs install
The raid card is an Adaptec 2100S.
-----Original Message-----
From: Josh Paetzel [mailto:josh at tcbug.org]
Sent: Saturday, November 11, 2006 9:35 AM
To: tclug-list at mn-linux.org; pjcrump at bitstream.net
Subject: Re: [tclug-list] Drives Not Recognized on Dell Poweredge 1550
CentOs install
On Saturday 11 November 2006 01:20, Phillip Crump wrote:
> I have a new (used) Dell Poweredge 1550
2023 Feb 13
1
ctdb tcp kill: remaining connections
Hello,
we are using ctdb 4.15.5 on RHEL8 (kernel 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via TCP) to RHEL7/8 clients. Whenever an IP takeover happens, most clients report something like this:
[Mon Feb 13 12:21:22 2023] nfs: server x.x.253.252 not responding, still trying
[Mon Feb 13 12:21:28 2023] nfs: server x.x.253.252 not responding, still trying
[Mon Feb 13 12:22:31 2023] nfs: server
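A small diagnostic sketch for this situation, run on the node that released the IP (2049 is the standard NFSv3-over-TCP port, an assumption since the post does not name it):

  # List established TCP connections still bound to the local NFS port;
  # any hits are connections the takeover failed to reset.
  ss -tn state established 'sport = :2049'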
2014 Feb 24
5
[Bug 2205] New: -S does hostname lookup although it is unused
https://bugzilla.mindrot.org/show_bug.cgi?id=2205
Bug ID: 2205
Summary: -S does hostname lookup although it is unused
Product: Portable OpenSSH
Version: 6.5p1
Hardware: amd64
OS: Linux
Status: NEW
Severity: normal
Priority: P5
Component: ssh
Assignee: unassigned-bugs at
2023 Feb 16
1
ctdb tcp kill: remaining connections
On Thu, 16 Feb 2023 17:30:37 +0000, Ulrich Sibiller
<ulrich.sibiller at atos.net> wrote:
> Martin Schwenke schrieb am 15.02.2023 23:23:
> > OK, this part looks kind-of good. It would be interesting to know how
> > long the entire failover process is taking.
>
> What exactly would you define as the beginning and end of the failover?
From "Takeover run