Displaying 20 results from an estimated 59 matches for "failbacks".
2010 Jul 28
1
remus - failback?
Does Remus provide a failback mechanism?
2017 Aug 14
6
Failback mailboxes?
Hi!
Have been using Fedora as my dovecot server for
some time and am struggling with systemd
at every update.
Fedora insists on setting
ProtectSystem=full in both dovecot.service and postfix.service
at every update of the packages.
This makes my mailstore, which is in /usr/local/var/mail,
read-only.
And this makes the incoming emails delivered through
dovecot-lda disappear into /dev/null until I
2017 Aug 14
0
Failback mailboxes?
14.08.2017 09:24 Dag Nygren wrote:
>
> Hi!
>
> Have been using Fedora as my dovecot server for
> some time and am struggling with systemd
> at every update.
> Fedora insists on setting
> ProtectSystem=full in both dovecot.service and postfix.service
> at every update of the packages.
>
> This makes my mailstore, which is in /usr/local/var/mail,
> read-only.
2017 Aug 16
0
Failback mailboxes?
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On Wed, 16 Aug 2017, Matt Bryant wrote:
> Hmm, if a message cannot be written to disk, surely it remains in the MDA
> queue as not delivered and does not just disappear? Or am I reading this
> wrong?!
As Matt writes, your MDA (aka dovecot-lda) returns with an exit code != 0,
and your MTA should queue the message for later re-delivery.
IMHO, you
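For context, a minimal sketch of the convention being described. The wrapper and the dovecot-lda path below are hypothetical, not from the thread; 75 is EX_TEMPFAIL from sysexits.h, which tells most MTAs (including Postfix) to keep the message queued and retry later:

#!/bin/sh
# Hypothetical LDA wrapper (binary path is an assumption): if dovecot-lda
# cannot deliver, e.g. because the mailstore is read-only, exit with
# EX_TEMPFAIL (75) so the calling MTA re-queues the mail instead of dropping it.
/usr/libexec/dovecot/dovecot-lda -d "$USER" || exit 75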
2017 Aug 15
3
Failback mailboxes?
Hmm, if a message cannot be written to disk, surely it remains in the MDA
queue as not delivered and does not just disappear? Or am I reading this
wrong?!
> Dag Nygren <mailto:dag at newtech.fi>
> 16 August 2017 at 7:14 am
> Thanks for all the advice on how to configure systemd
> not to lose my emails after every update. Much appreciated.
>
> But there could be other reasons
2017 Aug 14
1
Failback mailboxes?
On Monday 14 August 2017 10:22:54 Sander Lepik wrote:
> 14.08.2017 09:24 Dag Nygren kirjutas:
> > PS! I really hate systemd - Destroys the UNIX way of
> > doing things with a heavy axe....
>
> Don't hate it, better learn to use it:
> https://wiki.archlinux.org/index.php/systemd#Drop-in_files
I cannot find a way to "remove"
the ProtectSystem setting, as there
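For reference, the drop-in mechanism linked above can override the packaged setting without editing the unit file. A minimal sketch, assuming the stock dovecot.service name (the same applies to postfix.service) and that simply disabling the sandbox is acceptable; this is not a fix verified on the list:

# /etc/systemd/system/dovecot.service.d/override.conf  (created by hand)
[Service]
# Override the packaged ProtectSystem=full; drop-ins survive package updates.
ProtectSystem=false

After 'systemctl daemon-reload' and a service restart the override takes effect; an alternative that keeps the sandbox is ReadWritePaths=/usr/local/var/mail (available in systemd 231 and later).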
2017 Aug 15
0
Failback mailboxes?
Thanks for all the advice on how to configure systemd
not to lose my emails after every update. Much appreciated.
But there could be other reasons for the mailboxes not being
writable and what I am really asking for is for
dovecot-lda not to lose the incoming emails into thin air
in these cases.
Could we have some kind of collective place/places where they would
be saved in this case and then
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha , unplug the network cable of serving node, takes around ~20 mins for IO to resume
Hi all
We did some failover/failback tests on 2 nodes (A and B) with architecture 'glusterfs + ctdb (public address) + nfs-ganesha'.
1st:
During write, unplug the network cable of serving node A
-> NFS client took a few seconds to recover and continue writing.
After some minutes, plug the network cable of serving node A
-> NFS client also took a few seconds to recover
2024 Oct 15
1
ctdb tcp settings for statd failover
Hi,
In current samba (commit 6140c3177a0330f42411618c3fca28930ea02a21), ctdb/tools/statd_callout_helper contains this comment:
notify)
...
# we need these settings to make sure that no tcp connections survive
# across a very fast failover/failback
#echo 10 > /proc/sys/net/ipv4/tcp_fin_timeout
#echo 0 > /proc/sys/net/ipv4/tcp_max_tw_buckets
#echo 0 >
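For readability, the commented-out echo lines above map to these sysctl settings (the third setting is truncated in the excerpt, so only the first two are shown; the values are the script author's, not a recommendation):

sysctl -w net.ipv4.tcp_fin_timeout=10
sysctl -w net.ipv4.tcp_max_tw_buckets=0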
2024 Oct 16
1
ctdb tcp settings for statd failover
Hi Ulrich,
On Tue, 15 Oct 2024 15:22:51 +0000, Ulrich Sibiller via samba
<samba at lists.samba.org> wrote:
> In current samba (commit 6140c3177a0330f42411618c3fca28930ea02a21),
> ctdb/tools/statd_callout_helper contains this comment:
>
> notify)
> ...
> # we need these settings to make sure that no tcp connections survive
> # across a very fast failover/failback
2005 Jun 20
2
Fallback
Thanks for the answer. How can I do a "failback" for a relayed stream?
The idea is to set up a relay that can use two connections for the same
mountpoint.
Karl Heyes wrote:
>On Mon, 2005-06-20 at 13:34, EISELE Pascal wrote:
>
>
>>Hi,
>>
>>I'm trying the following settings but it seems that it's not working :(
>>While I try to switch down
2010 Jun 21
0
Seriously degraded SAS multipathing performance
I'm seeing seriously degraded performance with round-robin SAS
multipathing. I'm hoping you guys can help me achieve full throughput
across both paths.
My System Config:
OpenSolaris snv_134
2 x E5520 2.4 GHz Xeon Quad-Core Processors
48 GB RAM
2 x LSI SAS 9200-8e (eight-port external 6Gb/s SATA and SAS PCIe 2.0 HBA)
1 x Mellanox 40 Gb/s dual-port PCIe 2.0 card
1 x JBOD:
2018 Nov 21
2
relay backup file
Hi,
I have install 2 Icecast servers.
We are not using a master relay, but we use specific mount relays for fallback:
server1 - pc1 encoder over dsl
server2 - pc2 encoder over cable internet
We use the same mountpoints on both icecast servers and use the failback
to a relay mountpoint from the other server.
It works fine: when we shut down pc1 or pc2, there is music on both icecast
servers.
We like to have
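A hedged sketch of that kind of cross-server setup in icecast.xml, on the understanding that the hostname, port and mount names below are invented; only the <relay> and <mount>/<fallback-mount> elements themselves are standard Icecast configuration:

<!-- on server1: pull the other server's stream as a local relay mount -->
<relay>
    <server>server2.example.com</server>
    <port>8000</port>
    <mount>/live</mount>
    <local-mount>/relay-live</local-mount>
    <on-demand>0</on-demand>
</relay>

<!-- if the local encoder disappears, listeners fall back to the relay -->
<mount>
    <mount-name>/live</mount-name>
    <fallback-mount>/relay-live</fallback-mount>
    <fallback-override>1</fallback-override>
</mount>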
2007 Feb 17
8
ZFS with SAN Disks and multipathing
Hi,
I just deployed ZFS on a SAN-attached disk array and it's working fine.
How do I get the dual-pathing advantage of the disk (like DMP in Veritas)?
Can someone point me to the correct doc and setup?
Thanks in Advance.
Rgds
Vikash Gupta
This message posted from opensolaris.org
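As an aside (not from the thread): on Solaris/OpenSolaris the native equivalent of DMP is MPxIO (Solaris I/O multipathing), usually enabled with stmsboot; check stmsboot(1M) for your release before running it, since it modifies device paths and asks for a reboot.

stmsboot -e          # enable MPxIO on all supported HBA ports, then reboot
mpathadm list lu     # afterwards, verify each LUN shows multiple paths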
2016 Nov 10
1
CTDB IP takeover/failover tunables - do you use them?
...will always be hosted by node Y.
The cost of using deterministic IP address assignment is that it
disables part of the logic where ctdb tries to reduce the number of
public IP assignment changes in the cluster. This tunable may increase
the number of IP failover/failbacks that are performed on the cluster
by a small margin.
LCP2PublicIPs
Default: 1
When set to 1, ctdb uses the LCP2 ip allocation algorithm.
I plan to replace these with a single tunable to select the algorithm
(0 = deterministic, 1 = non-deterministic, 2 = LCP2 (default))....
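To make the mechanics concrete: tunables like these are read and changed per node with the ctdb tool. LCP2PublicIPs is quoted from the text above; DeterministicIPs is the name usually associated with the deterministic assignment described earlier and should be confirmed with 'ctdb listvars' on your version; runtime changes do not persist across restarts.

ctdb listvars                  # show all tunables and current values
ctdb getvar LCP2PublicIPs
ctdb setvar DeterministicIPs 1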
2009 Sep 17
1
multipath using defaults rather than multipath.conf contents for some devices (?) - why ?
Hi all,
We have a RH Linux server connected to two HP SAN controllers, one an HSV200 (on the way out),
the other an HSV400 (on the way in), via a QLogic HBA.
/etc/multipath.conf contains this:
device {
        vendor            "(COMPAQ|HP)"
        product           "HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]0"
        getuid_callout    "/sbin/scsi_id -g -u -s /block/%n"
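When a device appears to use the built-in defaults instead of a multipath.conf entry, comparing the two views below usually reveals whether the vendor/product strings actually matched; these are standard multipath-tools commands, though availability and output format vary with the version shipped by the distribution:

multipath -t                  # dump the compiled-in defaults and hardware table
multipathd -k"show config"    # show the effective, merged runtime configuration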
2019 Mar 04
0
glusterfs + ctdb + nfs-ganesha , unplug the network cable of serving node, takes around ~20 mins for IO to resume
Hi Dan,
On Mon, 25 Feb 2019 02:43:31 +0000, "Liu, Dan via samba"
<samba at lists.samba.org> wrote:
> We did some failover/failback tests on 2 nodes (A and B) with
> architecture 'glusterfs + ctdb (public address) + nfs-ganesha'.
>
> 1st:
> During write, unplug the network cable of serving node A
> -> NFS client took a few seconds to recover and continue
2017 Jan 31
1
multipath show config different in CentOS 7?
Hello,
Suppose I want to use a special configuration for my IBM/1814 storage array
LUNs; then I put something like this in multipath.conf:
devices {
    device {
        vendor               "IBM"
        product              "^1814"
        product_blacklist    "Universal Xport"
        path_grouping_policy "group_by_prio"
        path_checker
2019 Mar 31
1
mountpoint configuration beyond the basics?
Hello,
I'm new to icecast, but in a short time I set up a streaming system using a
raspberry pi as the stream source, through darkice, towards my icecast server.
All seemed fine with the basic mountpoint configuration, but when I
tried some of the more "advanced" features I failed badly.
More specifically:
1. I tried to force a failback mountpoint like this:
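The excerpt ends before the poster's configuration, but for reference a basic fallback definition in icecast.xml looks roughly like the sketch below (mount names invented); note that the Icecast element is spelled fallback-mount, not failback:

<mount>
    <mount-name>/live</mount-name>
    <fallback-mount>/backup</fallback-mount>
    <fallback-override>1</fallback-override>
</mount>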
2006 Apr 13
1
device-mapper multipath
I am attempting to get multipath working with device-mapper (CentOS
4.2 and 4.3). It works on EVERY install of mine from RH (also v4.2,
4.3), but the same multipath.conf imported to all my installs of
CentOS does not work. Note that I have tested a working 4.2
configuration file from RH on CentOS 4.2 and a working 4.3
configuration (it changed slightly) on CentOS 4.3. Neither worked. Our
production