similar to: Trouble with rsync inside fcron: buffer overflow in recv_exclude_list

Displaying 20 results from an estimated 200 matches similar to: "Trouble with rsync inside fcron: buffer overflow in recv_exclude_list"

2012 Jan 07
0
fcron scheduler
Hi, Is the fcron scheduler available in CentOS 5.6 Server? I did yum search fcron and it does not return anything. As per http://fcron.free.fr/description.php, fcron is a scheduler. It *aims at replacing Vixie Cron*, so it implements most of its functionality. But contrary to Vixie Cron, fcron *does not need your system to be up 7 days a week, 24 hours a day*: it also works well with systems
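For anyone new to fcron's crontab format, a minimal fcrontab sketch (the script paths are hypothetical, and the syntax is only summarised here from fcron's documentation):

    # run once per day of system uptime, even if the machine is not on 24/7
    @ 1d /usr/local/bin/nightly-backup.sh
    # run once each calendar day, between 03:00 and 05:59, catching up after downtime
    %daily * 3-5 /usr/local/bin/rotate-logs.sh

Entries are edited with fcrontab -e, analogous to crontab -e.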
2012 Nov 21
1
Rsync --update copy one file multiples times
Hi All, I am using the rsync command to back up my mail server mailboxes to a backup server. I am using the following command for the backup: rsync -avzu -e ssh root@192.168.1.12:/mail_home /mail_backup I configured a daily cron job for this command on the backup server. On the mail server the /mail_home folder holds all users' mailboxes and is 345 GB in size, and on the backup server I have assigned 450 GB for
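As a hedged sketch of how such a job is often wrapped when run from cron (lockfile, log path and schedule are illustrative, not from the post); note that -u/--update only skips files that are already newer on the receiver, so it does not by itself explain one file being copied several times:

    #!/bin/sh
    # daily mailbox backup pulled onto the backup server; flock avoids overlapping runs
    flock -n /var/lock/mail_backup.lock \
      rsync -avzu -e ssh root@192.168.1.12:/mail_home /mail_backup \
      >> /var/log/mail_backup.log 2>&1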
2005 Jan 05
2
buffer overflow in recv_exclude_list using rsync under windows?
Hi, First post to the list, so please feel free to set me straight if I'm not following some protocol or other :o) We need to use rsync to send files to a client; being Windows users, we decided to try both the cwRsync implementation and a straight cygwin/rsync install. I'm experiencing the following errors using the cygwin implementation: $ sh clientupload.sh
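Since the subject points at recv_exclude_list, a minimal sketch of how an exclude list is normally handed to rsync (file name and paths are hypothetical, not taken from clientupload.sh); this error is often discussed alongside long exclude lists and mismatched client/server rsync versions, so checking both versions is a common first step:

    # excludes.txt holds one pattern per line, e.g. *.tmp or cache/
    rsync -av --exclude-from=excludes.txt /cygdrive/c/outgoing/ user@server:/incoming/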
2014 Apr 04
1
dsync deleted my mailbox - what did I do wrong?
Hi. Mostly annoying: I migrated from one machine to another, made sure the target host worked as expected, updated MX records and - after a couple of days - signed it off as good. This is just my private machine, no big deal if something goes wrong. Everything's fine? Good, let's migrate my inbox from the old machine. There's no direct connectivity between those servers, so what
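For reference, a hedged sketch of a one-way dsync migration over ssh (user and host names are hypothetical); the important distinction is that doveadm backup mirrors the source and may delete mail on the destination, while doveadm sync merges changes in both directions:

    # pull the mailbox from the old host and make the local copy match it
    doveadm backup -R -u someuser ssh root@oldhost doveadm dsync-server -u someuser
    # two-way merge instead of a destructive mirror
    doveadm sync -u someuser ssh root@oldhost doveadm dsync-server -u someuser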
2007 Feb 18
4
sysutils/fusefs-ntfs working for anyone?
Hi there, I've been trying to mount my NTFS partitions with the NTFS-3g project's FUSE implementation but am unable to mount anything. I'm on 6-STABLE and have the latest versions of FUSE installed: fusefs-kmod-0.3.0_4 Kernel module for fuse fusefs-libs-2.6.2 FUSE allows filesystem implementation in userspace fusefs-ntfs-0.20070207RC1 Mount NTFS partitions and disk images I use
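A hedged sketch of the mount sequence usually suggested with those ports (device name and module name are assumptions; on some installs the module has to be loaded by full path or via the port's rc.d script):

    # load the FUSE kernel module from fusefs-kmod, then mount the NTFS slice
    kldload fuse
    ntfs-3g /dev/ad0s1 /mnt/ntfs
    # detach with: umount /mnt/ntfs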
2015 Jan 17
1
Re: Guests using more ram than specified
On 16.01.2015 15:14, Michal Privoznik wrote: > On 16.01.2015 13:33, Dennis Jacobfeuerborn wrote: >> Hi, >> today I noticed that one of my HVs started swapping aggressively and >> noticed that the two guests running on it use quite a bit more ram than >> I assigned to them. They respectively were assigned 124G and 60G with >> the idea that the 192G system then has
2015 Jan 16
2
Guests using more ram than specified
Hi, today I noticed that one of my HVs started swapping aggressively and that the two guests running on it use quite a bit more RAM than I assigned to them. They were assigned 124G and 60G respectively, with the idea that the 192G system then has 8G left for other purposes. In top I see the VMs using about 128G and 64G, which means there is nothing left for the system. This is on a CentOS 7
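A hedged way to compare what libvirt has assigned with what the qemu processes actually consume (guest names are illustrative); some growth past the configured RAM is normal from video memory, caches and QEMU's own allocations:

    # memory assigned and balloon statistics according to libvirt
    virsh dominfo guest1 | grep -i memory
    virsh dommemstat guest1
    # resident set size of each qemu process in KiB
    ps -C qemu-kvm -o pid=,rss=,comm=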
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
Hello there, as Strahil suggested, a separate thread might be better. Current state: - servers with 10TB HDDs - 2 HDDs make up a software RAID1 - each RAID1 is a brick - so 5 bricks per server - Volume info (complete below): Volume Name: workdata Type: Distributed-Replicate Number of Bricks: 5 x 3 = 15 Bricks: Brick1: gls1:/gluster/md3/workdata Brick2: gls2:/gluster/md3/workdata Brick3:
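Small-file performance on distributed-replicate volumes comes up regularly on this list; a hedged sketch of tunables that are usually mentioned, with purely illustrative values (check the defaults and documentation for your Gluster release before applying any of them):

    gluster volume set workdata performance.parallel-readdir on
    gluster volume set workdata cluster.lookup-optimize on
    gluster volume set workdata network.inode-lru-limit 200000
    gluster volume set workdata performance.io-thread-count 32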
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Good morning, the heal is still not running. Pending heals now sum up to 60K per brick. The heal used to start instantly, e.g. after a server reboot, with version 10.4, but doesn't with version 11. What could be wrong? I only see these errors on one of the "good" servers in glustershd.log: [2024-01-18 06:08:57.328480 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:2561:client4_0_lookup_cbk]
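A hedged set of commands commonly used to inspect and kick the self-heal daemon in this situation (volume name taken from the thread; output and availability of subcommands vary by release):

    # pending heals per brick
    gluster volume heal workdata info summary
    # trigger an index heal, or a full sweep if that does nothing
    gluster volume heal workdata
    gluster volume heal workdata full
    # confirm the self-heal daemon is up on every node
    gluster volume status workdata shd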
2024 Jan 17
2
Upgrade 10.4 -> 11.1 making problems
OK, I finally managed to get all servers, volumes etc. running, but it took a couple of restarts, cksum checks etc. One problem: a volume doesn't heal automatically, or doesn't heal at all. gluster volume status Status of volume: workdata Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Were you able to solve the problem? Can it be treated like a "normal" split brain? 'gluster peer status' and 'gluster volume status' are OK, so it kinda looks like "pseudo"... hubert On Thu, 18 Jan 2024 at 08:28, Diego Zuccato <diego.zuccato at unibo.it> wrote: > > That's the same kind of errors I keep seeing on my 2 clusters, >
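For a genuine split brain the CLI has dedicated views and resolution policies; a hedged sketch (the file path is hypothetical, and in the pseudo case discussed here these reportedly show nothing to resolve):

    gluster volume heal workdata info split-brain
    # resolve by newest mtime, or by picking one brick as the source
    gluster volume heal workdata split-brain latest-mtime /path/inside/volume/file
    gluster volume heal workdata split-brain source-brick gls1:/gluster/md3/workdata /path/inside/volume/file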
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
Those are the same kind of errors I keep seeing on my 2 clusters, regenerated some months ago. It seems to be a pseudo-split-brain that should be impossible on a replica 3 cluster but keeps happening. Sadly, I'm going to ditch Gluster ASAP. Diego On 18/01/2024 07:11, Hu Bert wrote: > Good morning, > heal still not running. Pending heals now sum up to 60K per brick. > Heal was starting
2007 May 04
3
NFS issue
Hi List, I must be going mad or something, but I've got a really odd problem with an NFS mount and a DVD-ROM. Here is the situation: /dev/md7 58G 18G 37G 33% /data which is shared out by NFS (/etc/exports). This has been working since I installed the OS, CentOS 4.4. I have a DVD drive that is device /dev/scd0, which I can mount anywhere I like, no problem. However, the problem
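For comparison, a hedged /etc/exports sketch covering both the RAID partition and a mounted DVD (client network and DVD mount point are hypothetical); removable media exported over NFS may need an explicit fsid so the server can identify the filesystem:

    # /etc/exports
    /data        192.168.0.0/24(rw,sync)
    /media/dvd   192.168.0.0/24(ro,sync,fsid=1)
    # re-export without restarting the NFS services
    exportfs -ra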
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Since glusterd does not consider it a split brain, you can't solve it with the standard split brain tools. I've found no way to resolve it except by manually handling one file at a time: completely unmanageable with thousands of files, and having to juggle between the actual path on the brick and the metadata files! Previously I "fixed" it by: 1) moving all the data from the volume to a temp
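The per-file handling referred to here usually means removing the stale copy directly on one brick together with its .glusterfs hard link; a hedged sketch only (brick path, file name and gfid are illustrative placeholders, and anything like this should be tried on a throwaway file first):

    # on the chosen brick, read the file's gfid
    getfattr -n trusted.gfid -e hex /gluster/md3/workdata/path/to/file
    # remove the file and its gfid hard link (first two bytes of the gfid name the directories)
    rm /gluster/md3/workdata/path/to/file
    rm /gluster/md3/workdata/.glusterfs/ab/cd/abcdef01-...
    # then let the good replicas rebuild it
    gluster volume heal workdata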
2007 Mar 06
1
blocks 256k chunks on RAID 1
Hi, I have a RAID 1 (using mdadm) on CentOS Linux and in /proc/mdstat I see this: md7 : active raid1 sda2[0] sdb2[1] 26627648 blocks [2/2] [UU] [-->> it's OK] md1 : active raid1 sdb3[1] sda3[0] 4192896 blocks [2/2] [UU] [-->> it's OK] md2 : active raid1 sda5[0] sdb5[1] 4192832 blocks [2/2] [UU] [-->> it's OK] md3 : active raid1 sdb6[1] sda6[0] 4192832 blocks [2/2]
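One hedged observation on the subject line: a plain RAID 1 mirror has no stripe chunk size, so a "256k chunks" figure in /proc/mdstat usually belongs to the write-intent bitmap rather than to the data layout. The array's own record can be checked with:

    # array-level view, including any bitmap
    mdadm --detail /dev/md7
    # per-member superblock view
    mdadm --examine /dev/sda2 /dev/sdb2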
2015 Jan 16
0
Re: Guests using more ram than specified
On 16.01.2015 13:33, Dennis Jacobfeuerborn wrote: > Hi, > today I noticed that one of my HVs started swapping aggressively and > noticed that the two guests running on it use quite a bit more ram than > I assigned to them. They respectively were assigned 124G and 60G with > the idea that the 192G system then has 8G for other purposes. In top I > see the VMs using about 128G and
2024 Jan 19
1
Upgrade 10.4 -> 11.1 making problems
Hi Strahil, hm, don't get me wrong, it may sound a bit stupid, but... where do I set the log level? Using Debian... https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level ls /etc/glusterfs/ eventsconfig.json glusterfs-georep-logrotate gluster-rsyslog-5.8.conf group-db-workload group-gluster-block group-nl-cache
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
Are you able to set the logs to debug level? It might provide a clue as to what is going on. Best Regards, Strahil Nikolov On Thu, Jan 18, 2024 at 13:08, Diego Zuccato <diego.zuccato at unibo.it> wrote: That's the same kind of errors I keep seeing on my 2 clusters, regenerated some months ago. Seems a pseudo-split-brain that should be impossible on a replica 3 cluster but keeps
2024 Jan 19
1
Upgrade 10.4 -> 11.1 making problems
gluster volume set testvol diagnostics.brick-log-level WARNING gluster volume set testvol diagnostics.brick-sys-log-level WARNING gluster volume set testvol diagnostics.client-log-level ERROR gluster --log-level=ERROR volume status --- Gilberto Nunes Ferreira On Fri, 19 Jan 2024 at 05:49, Hu Bert <revirii at googlemail.com> wrote: > Hi Strahil, > hm, don't get me
2010 Nov 04
1
orphan inodes deleted issue
Dear All, My servers are running CentOS 5.5 x86_64 with kernel 2.6.18.194.17.4.el, a Gigabyte motherboard and 2 hard disks (Seagate 500GB). My CentOS boxes are configured with RAID 1; yesterday and today I had the same problem on 2 servers with the same configuration. See the following error messages for details: EXT3-fs: INFO: recovery required on readonly filesystem. EXT3-fs: write access will be enabled during
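When ext3 keeps reporting orphan inode cleanup on otherwise healthy mirrors, the usual next steps are an offline filesystem check and a look at the disks themselves; a hedged sketch (device names are illustrative, and e2fsck should only be run on an unmounted or read-only filesystem, e.g. from rescue media):

    e2fsck -f -y /dev/md2
    # check both RAID members for reallocated or pending sectors
    smartctl -a /dev/sda
    smartctl -a /dev/sdb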