similar to: mountpoint configuration beyond the basics?

Displaying 20 results from an estimated 900 matches similar to: "mountpoint configuration beyond the basics?"

2009 Nov 18
1
Move listeners problems
Thanks, Karl! I've checked the log files during a move command but I couldn't find anything that looked related to the problems. I've thought of upgrading to 2.3.2 but thought I should ask first if it was something common - saw that someone mailed about the same problems a couple of months ago, if I understood it correctly. /Mathias 2009/11/18 Karl Heyes <karl at xiph.org>:
2024 Oct 16
1
ctdb tcp settings for statd failover
Hi Ulrich, On Tue, 15 Oct 2024 15:22:51 +0000, Ulrich Sibiller via samba <samba at lists.samba.org> wrote: > In current (6140c3177a0330f42411618c3fca28930ea02a21) samba's > ctdb/tools/statd_callout_helper I find this comment: > > notify) > ... > # we need these settings to make sure that no tcp connections > survive # across a very fast failover/failback
2005 Sep 12
4
Icecast 2.3 RC3 Announcement..
Thanks to everyone who has been helping us test... your input is certainly appreciated. We've fixed a few bugs and are releasing RC3 as of now. Here's what was fixed: - log username to access log (bug #706) if available. - fix segv case on listmounts/moveclients when a fallback to file stream is running - Patch from martin@matuska.org: don't treat all clients as duplicates. Links
2010 Jun 21
0
Seriously degraded SAS multipathing performance
I'm seeing seriously degraded performance with round-robin SAS multipathing. I'm hoping you guys can help me achieve full throughput across both paths. My System Config: OpenSolaris snv_134 2 x E5520 2.4 GHz Xeon Quad-Core Processors 48 GB RAM 2 x LSI SAS 9200-8e (eight-port external 6Gb/s SATA and SAS PCIe 2.0 HBA) 1 X Mellanox 40 Gb/s dual port card PCIe 2.0 1 x JBOD:
2010 Jul 28
1
remus - failback?
Does remus provide a failback mechanism?
2019 Mar 04
0
glusterfs + ctdb + nfs-ganesha , unplug the network cable of serving node, takes around ~20 mins for IO to resume
Hi Dan, On Mon, 25 Feb 2019 02:43:31 +0000, "Liu, Dan via samba" <samba at lists.samba.org> wrote: > We did some failover/failback tests on 2 nodes (A and B) with > architecture 'glusterfs + ctdb(public address) + nfs-ganesha'. > > 1st: > During write, unplug the network cable of serving node A > ->NFS Client took a few seconds to recover to continue
2017 Aug 14
0
Failback mailboxes?
14.08.2017 09:24 Dag Nygren wrote: > > Hi! > > Have been using Fedora as my dovecot server for > some time and am struggling with systemd > at every update. > Fedora insists on setting > ProtectSystem=full in both dovecot.service and postfix.service > at every update of the packages. > > This makes my mailstore which is in /usr/local/var/mail > read-only.
2017 Aug 16
0
Failback mailboxes?
On Wed, 16 Aug 2017, Matt Bryant wrote: > hmm if message cannot be written to disk surely it remains on mda queue > as not delivered and does not just disappear ? or am i reading this > wrong ?! as Matt writes your MDA (aka dovecot-lda) returns with an exit code != 0 and your MTA should queue the message for later re-delivery. IMHO, you
2016 Nov 10
1
CTDB IP takeover/failover tunables - do you use them?
I'm currently hacking on CTDB's IP takeover/failover code. For Samba 4.6, I would like to rationalise the IP takeover-related tunable parameters. I would like to know if there are any users who set the values of these tunables to non-default values. The tunables in question are: DisableIPFailover Default: 0 When set to non-zero, ctdb will not perform failover or
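For anyone checking whether their cluster deviates from the defaults before replying: CTDB exposes these tunables at runtime. A minimal sketch, assuming a running CTDB cluster (note that `setvar` changes only the running daemon and is not persistent across restarts):

```shell
# List all tunables with their current values:
ctdb listvars

# Read a single tunable:
ctdb getvar DisableIPFailover

# Set it on the running daemon (non-persistent):
ctdb setvar DisableIPFailover 1
```

How to make a tunable persistent varies between CTDB versions and packaging, so check your distribution's ctdb configuration file.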
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha , unplug the network cable of serving node, takes around ~20 mins for IO to resume
Hi all We did some failover/failback tests on 2 nodes (A and B) with architecture 'glusterfs + ctdb(public address) + nfs-ganesha'. 1st: During write, unplug the network cable of serving node A ->NFS Client took a few seconds to recover to continue writing. After some minutes, plug the network cable of serving node A ->NFS Client also took a few seconds to recover
2017 Aug 15
3
Failback mailboxes?
hmm if message cannot be written to disk surely it remains on mda queue as not delivered and does not just disappear ? or am i reading this wrong ?! > Dag Nygren <mailto:dag at newtech.fi> > 16 August 2017 at 7:14 am > Thanks for all the advice on how to configure systemd > not to lose my emails after every update. Much appreciated. > > But there could be other reasons
2009 Sep 17
1
multipath using defaults rather than multipath.conf contents for some devices (?) - why ?
hi all We have a rh linux server connected to two HP SAN controllers, one an HSV200 (on the way out), the other an HSV400 (on the way in). (Via a Qlogic HBA). /etc/multipath.conf contains this : device { vendor "(COMPAQ|HP)" product "HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]0" getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
2005 Jun 20
2
Fallback
Thanks for the answer. How can I do a "failback" for a relayed stream? The idea is to set up a relay that can use two connections for the same mountpoint. Karl Heyes wrote: >On Mon, 2005-06-20 at 13:34, EISELE Pascal wrote: > > >>Hi, >> >>I'm trying the following settings but it seems that it's not working :( >>While I try to switch down
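As a sketch of what this usually looks like in Icecast's configuration: the per-mount `<fallback-mount>` setting moves listeners to another mountpoint when the primary source drops, and `<fallback-override>` moves them back when it returns. The mount names below are hypothetical:

```xml
<!-- icecast.xml fragment (hypothetical mount names) -->
<mount>
  <mount-name>/primary.ogg</mount-name>
  <!-- listeners move here when /primary.ogg loses its source -->
  <fallback-mount>/backup.ogg</fallback-mount>
  <!-- move listeners back when /primary.ogg returns ("failback") -->
  <fallback-override>1</fallback-override>
</mount>
```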
2017 Aug 14
6
Failback mailboxes?
Hi! Have been using Fedora as my dovecot server for some time and am struggling with systemd at every update. Fedora insists on setting ProtectSystem=full in both dovecot.service and postfix.service at every update of the packages. This makes my mailstore which is in /usr/local/var/mail Read-only. And this makes the incoming emails delivered through dovecot-lda disappear into /dev/null until I
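One common way to reconcile `ProtectSystem=full` with a mailstore under /usr is a systemd drop-in rather than editing the packaged unit file, since Fedora overwrites the packaged unit on update but leaves drop-ins alone. A sketch, assuming the path from the post:

```ini
# /etc/systemd/system/dovecot.service.d/mailstore.conf
# (run `systemctl daemon-reload` after creating this file)
[Service]
# Re-grant write access to the mailstore despite ProtectSystem=full
ReadWritePaths=/usr/local/var/mail
```

On older systemd versions the directive is named `ReadWriteDirectories` instead.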
2002 Oct 03
2
icecast2 connect / reconnect
Hi, I have the following problem with icecast2: I send a stream for some time. Then the source disconnects (maybe just by abruptly dropping the network connection), and I reconnect immediately. My problem is that these reconnects don't succeed regularly (say, 20-30% of the time). The same setup worked well with icecast 1.x. Akos
2008 Mar 04
0
Device-mapper-multipath not working correctly with GNBD devices
Hi all, I am trying to configure a failover multipath between 2 GNBD devices. I have a 4-node Redhat Cluster Suite (RCS) cluster. 3 of them are used for running services, 1 of them for central storage. In the future I am going to introduce another machine for central storage. The 2 storage machines are going to share/export the same disk. The idea is not to have a single point of failure
2024 Oct 15
1
ctdb tcp settings for statd failover
Hi, In current (6140c3177a0330f42411618c3fca28930ea02a21) samba's ctdb/tools/statd_callout_helper I find this comment: notify) ... # we need these settings to make sure that no tcp connections survive # across a very fast failover/failback #echo 10 > /proc/sys/net/ipv4/tcp_fin_timeout #echo 0 > /proc/sys/net/ipv4/tcp_max_tw_buckets #echo 0 >
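The commented-out echo lines correspond to the following kernel settings; here is the same thing in sysctl notation (the third setting is truncated in the quote above, so it is omitted rather than guessed):

```
# Values from the statd_callout_helper comment, as sysctl settings.
# Expire FIN_WAIT sockets quickly so no TCP connection
# survives a very fast failover/failback:
net.ipv4.tcp_fin_timeout = 10
# Disable TIME_WAIT buckets entirely:
net.ipv4.tcp_max_tw_buckets = 0
```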
2018 Nov 21
2
relay backup file
Hi, I have installed 2 Icecast servers. We are not using a master relay, but we use a specific mount relay for fallback: server1 - pc1 encoder over dsl server2 - pc2 encoder over cable internet We use the same mountpoints on both icecast servers and use the fallback to a relay mountpoint from the other server. Works fine: when we shut down pc1 or pc2 there is music on both icecast servers. We would like to have
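A sketch of the cross-relay half of such a setup: each server defines a `<relay>` that pulls the other server's mountpoint into a local mount, which can then serve as a fallback target. Hostnames and mount names here are hypothetical:

```xml
<!-- icecast.xml fragment on server1 (hypothetical names) -->
<relay>
  <server>server2.example.com</server>
  <port>8000</port>
  <mount>/stream.ogg</mount>
  <local-mount>/relay-stream.ogg</local-mount>
  <!-- keep the relay connected even with no listeners -->
  <on-demand>0</on-demand>
</relay>
```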
2006 Sep 13
1
URL authentication
'The work' - are you referring to http://svn.xiph.org/icecast/branches/kh ? Most important is that I shouldn't have to edit the icecast configuration file when a new mountpoint needs to be added. Therefore all authentication of sources and listeners should go through the URLs. KJ Karl Heyes wrote: > Klaas Jan Wierenga wrote: >> >> In my case that's not needed,
2009 Sep 27
0
SUMMARY : multipath using defaults rather than multipath.conf contents for some devices (?) - why ?
The reason for the behaviour observed below turned out to be that the device entry in /etc/multipath.conf was inadvertently appended *after* the devices section, rather than inside it - so that we had

#devices {
#    device {
#        blah blah
#    }
#    etc
#}
(file has a bunch of defaults commented out)

device { our settings }

*rather than*
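For reference, a corrected layout, with the device entry from the earlier post placed inside an uncommented devices section, would look roughly like this:

```
# /etc/multipath.conf - device settings must sit inside devices { }
devices {
    device {
        vendor          "(COMPAQ|HP)"
        product         "HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]0"
        getuid_callout  "/sbin/scsi_id -g -u -s /block/%n"
    }
}
```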