similar to: Tinc systemd dependencies ?

Displaying 20 results from an estimated 6000 matches similar to: "Tinc systemd dependencies ?"

2018 Jan 19
0
Error: Corrupted dbox file
Hello Florent, how did you proceed with the upgrade? Did you follow the recommended steps to upgrade Ceph (mons first, then OSDs, then MDS)? Did you stop Dovecot before upgrading the MDS in particular? Did you remount the filesystem? Did you upgrade the Ceph client too? Give people the complete picture and someone might be able to help you. Ask on the ceph-users list too. Regards, Webert
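The upgrade order described above can be sketched as shell steps. This is a hedged outline only: host and unit names (mon1, mds1) are placeholders, and the exact procedure depends on the release notes for the Ceph version involved.

```
# Placeholders: mon1, mds1. Check cluster health between every step.
systemctl restart ceph-mon@mon1      # 1. all monitors, one at a time
ceph -s                              #    wait for HEALTH_OK before the next
systemctl restart ceph-osd.target    # 2. then the OSDs, host by host
systemctl stop dovecot               # 3. stop the CephFS client first,
systemctl restart ceph-mds@mds1      #    then restart the MDS,
systemctl start dovecot              #    then remount and restart the client
```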
2015 Dec 07
0
Tinc & moving VMs across network
On Mon, Dec 7, 2015 at 7:20 PM, Florent B <florent at coppint.com> wrote: > Hi everyone, > > I already posted about this issue, but I can't find the old thread. > > I have a cluster of 5 nodes, running Proxmox 4, and Tinc as "virtual > switch" for my nodes : on each node, a bridge "vmbr1" where Tinc is > connected, provides me a secured network
2016 Aug 23
0
Cannot open config file /etc/tinc/XXX/hosts/YYYY: No such file or directory
Why don't you place a service dependency in your tinc init/systemd service files to depend on your fuse/mount service? On Tuesday, 23 August 2016, Florent B <florent at coppint.com> wrote: > Hi everyone, > > I have a special setup where hosts files of Tinc are stored in a > directory mounted by fuse and shared across my hosts (Proxmox /etc/pve). > > On boot, sometimes Tinc
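The dependency suggested in that reply can be expressed as a systemd drop-in. A minimal sketch, assuming the fuse filesystem is mounted at /etc/pve (so its mount unit is etc-pve.mount) and that tinc runs as a templated tinc@.service; both names are assumptions for illustration:

```
# /etc/systemd/system/tinc@.service.d/wait-for-hosts.conf
# Hypothetical drop-in: do not start tincd until the fuse mount
# holding the hosts/ directory is available.
[Unit]
Requires=etc-pve.mount
After=etc-pve.mount
```

Reload with `systemctl daemon-reload` for the drop-in to take effect.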
2018 Aug 27
0
Disable encryption with Tinc 1.1
Try to disable ExperimentalProtocol. Florent B <florent at coppint.com> wrote on Friday, August 10, 2018 at 9:16 PM: > > Hi, > > Is it possible to completely disable encryption with Tinc 1.1 ? > > I set in my configuration : > > ExperimentalProtocol = no > Cipher = none > Digest = none > > But it does not seem to disable encryption (same performance). > > Is it possible
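The settings quoted in the thread would sit together roughly as below; the netname and the exact placement are assumptions (in some tinc versions Cipher and Digest belong in the per-host files rather than tinc.conf):

```
# /etc/tinc/<netname>/tinc.conf
ExperimentalProtocol = no   # fall back to the legacy protocol
Cipher = none
Digest = none
```

The point of the reply is that Cipher/Digest settings only apply to the legacy protocol, so ExperimentalProtocol (SPTPS) has to be off for them to have any effect.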
2015 Dec 07
2
Tinc & moving VMs across network
On 7 December 2015 at 17:20, Florent B <florent at coppint.com> wrote: > I have a cluster of 5 nodes, running Proxmox 4, and Tinc as "virtual > switch" for my nodes : on each node, a bridge "vmbr1" where Tinc is > connected, provides me a secured network for my VMs (connected to that > bridge). > > When I move (hot move) a VM from a host to another, I
2015 Mar 27
1
Option to not add "Received" header ?
You could remove them with sieve in the latest version of pigeonhole. On Mar 24, 2015 7:33 AM, Florent B <florent at coppint.com> wrote: > > I know about RFC's, but that could be an option, not enabled by default.
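A minimal sketch of that sieve approach, assuming Pigeonhole's editheader extension is enabled; note that Received is normally a protected header, so the relevant Pigeonhole setting (sieve_editheader_protected) may also need adjusting for this to work:

```
require ["editheader"];

# Strip all Received headers from the message before delivery.
deleteheader "Received";
```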
2017 Dec 27
1
Package repository now available
Thank you for your report, we'll look into it! Aki > On December 27, 2017 at 8:16 PM Florent B <florent at coppint.com> wrote: > > > Hi, > > This repository does not work with Aptly. > It seems "architecture" line is wrong in InRelease file (needs to be > "Architectures:" instead of "Architecture:"). > And
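For reference, the Debian repository format uses the plural field name in the (In)Release file, which is what Aptly expects. A corrected fragment might look like this (all field values here are made up for illustration):

```
Origin: Dovecot
Suite: stable
Codename: stretch
Architectures: amd64 i386
Components: main
```

The singular `Architecture:` line reported in the quoted message is the part that trips up Aptly.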
2017 Oct 20
0
HTTPS for http://xi.dovecot.fi/debian/
> On October 20, 2017 at 12:37 PM Florent B <florent at coppint.com> wrote: > > > Hi, > > We use Dovecot packages from http://xi.dovecot.fi/debian/. > > Could it be possible to serve it with HTTPS ? > > Thank you. > > Florent Hi! It now has HTTPS enabled with a valid certificate. Aki
2015 Dec 22
0
Sending packet from hostX to hostY via hostY
It's not. In fact it's explicitly telling you that it's *not* forwarding. On 22 December 2015 at 15:55, Florent B <florent at coppint.com> wrote: > Hi, > > I have a lot of messages like this in my Tinc 1.1-git log : > > Sending packet from host5 (MYSELF port 655) to host4 (192.168.0.4 port > 655) via host4 (192.168.0.4 port 655) (UDP) > > Is it expected ?
2015 Dec 22
0
Invalid packet seqno: 58073 != 0 from host5
This smells like https://github.com/gsliepen/tinc/pull/104 - are you sure you're using latest HEAD? How often does this occur? Does it correlate with other events such as nodes joining or leaving? Does it occur more often if you reduce the value of KeyExpire? On 22 December 2015 at 16:10, Florent B <florent at coppint.com> wrote: > Hi, > > With latest Tinc 1.1 git code, I have
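To test the last suggestion, KeyExpire can be lowered in the network's config; 60 is an arbitrary value for experimentation (the tinc default key lifetime is 3600 seconds), and the netname is a placeholder:

```
# /etc/tinc/<netname>/tinc.conf
KeyExpire = 60
```

If the "Invalid packet seqno" errors become more frequent with a shorter key lifetime, that points at the rekeying race discussed in the linked pull request.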
2011 Oct 09
1
Btrfs High IO-Wait
Hi, I have high IO-wait on the OSDs (Ceph); the OSDs are running a v3.1-rc9 kernel. I also experience high IO rates, around 500 IO/s reported via iostat.

Device: rrqm/s wrqm/s  r/s  w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda       0.00   0.00 0.00 6.80  0.00 62.40    18.35     0.04  5.29    0.00    5.29  5.29  3.60
sdb
2014 Feb 25
3
PMTU = 1518 over local network at 1500 MTU
Hi all, I have two nodes, connected to a switch, using Tinc 1.1 from git. They connect to each other with SPTPS, and to other nodes on the Internet with the old protocol because those have Tinc 1.0. There is no problem with remote nodes, but between my 2 local nodes they see a PMTU of 1518. But the local network has a 1500 MTU! So the nodes can ping each other but larger data does not get through. test1=sllm1 test2=sllm2
2011 Dec 09
1
Error: Corrupted index cache file /xxx/yyy/zzz/indexes/.INBOX/dovecot.index.cache: invalid record size
Hi all, I got a problem with a Dovecot IMAP/POP installation. Since a recent failure of our distributed file system (no loss of data btw), Dovecot seems to have a problem with index cache files. For a lot of accounts, I have this error in logs: Error: Corrupted index cache file /xxx/yyy/zzz/indexes/.INBOX/dovecot.index.cache: invalid record size If I delete all files from /indexes/
2012 Aug 30
5
Ceph + RBD + Xen: Complete collapse -> Network issue in domU / Bad data for OSD / OOM Kill
Hi, A bit of explanation of what I'm trying to achieve: we have a bunch of homogeneous nodes that have CPU + RAM + storage and we want to use that as some generic cluster. The idea is to have Xen on all of these and run a Ceph OSD in a domU on each to "export" the local storage space to the entire cluster. And then use RBD to store / access VM images from any of the machines.
2013 Nov 06
0
[PATCH] Btrfs: fix lockdep error in async commit
Lockdep complains about btrfs's async commit:

[ 2372.462171] [ BUG: bad unlock balance detected! ]
[ 2372.462191] 3.12.0+ #32 Tainted: G W
[ 2372.462209] -------------------------------------
[ 2372.462228] ceph-osd/14048 is trying to release lock (sb_internal) at:
[ 2372.462275] [<ffffffffa022cb10>] btrfs_commit_transaction_async+0x1b0/0x2a0 [btrfs]
[ 2372.462305] but there
2017 Feb 10
0
Special use case : "diff" file
Going with just rsync, you would have to maintain a local backup as well, then use --write-batch to make a diff file to also upload to your offsite storage. Outside of rsync, maybe rdiff-backup can do this more easily? On 02/10/2017 11:20 AM, Florent B wrote: > Hi everyone, > > Sorry if I don't use the right words, but I don't know how to call what > I need, and I don't know if
2014 Nov 07
4
[Bug 10925] New: non-atomic xattr replacement in btrfs => rsync --read-batch random errors
https://bugzilla.samba.org/show_bug.cgi?id=10925

            Bug ID: 10925
           Summary: non-atomic xattr replacement in btrfs => rsync
                    --read-batch random errors
           Product: rsync
           Version: 3.1.0
          Hardware: All
               URL: http://article.gmane.org/gmane.comp.file-systems.btrfs/40013
                OS: All
2011 Oct 26
1
Re: ceph on btrfs [was Re: ceph on non-btrfs file systems]
2011/10/26 Sage Weil <sage@newdream.net>: > On Wed, 26 Oct 2011, Christian Brunner wrote: >> >> > Christian, have you tweaked those settings in your ceph.conf? It would be >> >> > something like 'journal dio = false'. If not, can you verify that >> >> > directio shows true when the journal is initialized from your osd log?
2023 Dec 14
2
Gluster -> Ceph
A big RAID array isn't great as a brick. If the array does fail, the larger brick means much longer heal times. The main question I ask when evaluating storage solutions is, "what happens when it fails?" With Ceph, if the placement database is corrupted, all your data is lost (that happened to my employer once, losing 5PB of customer data). With Gluster, it's just files on disks, easily
2018 Mar 29
2
What commands there are in Tinc-VPN v1.0 to show information about the VPN?
Hi everyone, I need to know if there is any command that shows the connected hosts, networks, etc. in Tinc-VPN v1.0, as there is in Tinc-VPN v1.1. Regards, Ramses