Displaying 20 results from an estimated 56 matches for "syncer".
2013 Jul 17
1
syncer causing latency spikes
Hello,
I'm trying to investigate and solve some postgres latency spikes that
I'm seeing as a result of some behaviour in the syncer. This is with
FreeBSD 8.2 (with some local modifications and backports, r231160 in
particular). The system has an LSI 9261-8i RAID controller (backed by
mfi(4)) and the database and WALs are on separate volumes, a RAID 6 and
a RAID 1 respectively. It has about 96GB of RAM installed.
What's hap...
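The FreeBSD syncer pushes out dirty buffers on fixed delays, so a burst of dirty pages can become a burst of writes that stalls fsync-heavy workloads such as postgres. A first step is to look at the writeback delay knobs; a minimal sketch, assuming stock FreeBSD sysctl names (they are not shown in the post):
  # delays, in seconds, before dirty file data, directories and metadata
  # are queued for writeback by the syncer
  sysctl kern.filedelay kern.dirdelay kern.metadelay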
2009 Jul 22
3
DRBD very slow....
...ink with drbd is 11 MB/sec
(megabytes)
But if I copy a 1 gig file over that link I get 110 MB/sec.
Why is DRBD so slow?
I am not using drbd encryption because of the back to back link.
Here is a part of my drbd config:
# cat /etc/drbd.conf
global {
usage-count yes;
}
common {
protocol C;
syncer { rate 80M; }
net {
allow-two-primaries;
}
}
resource xenotrs {
device /dev/drbd6;
disk /dev/vg0/xenotrs;
meta-disk internal;
on baldur.somedomain.local {
address 10.99.99.1:7793;
}
on thor.somedomain.local {
address 10.99.99.2:7793;
}
}
Kind regards,
Co...
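One note on this question: the syncer rate only caps background resynchronisation, not normal protocol C replication, and the observed 11 MB/sec is well below the configured 80M cap, so the limiter is elsewhere. A common 8.3-era tuning step is a larger activity log; a hedged sketch, with the al-extents value an illustrative assumption, not advice from the thread:
  # in /etc/drbd.conf:  syncer { rate 80M; al-extents 1801; }
  # then apply the change without taking the resource down:
  drbdadm adjust xenotrs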
2003 Apr 11
2
no idle CPU ... system hogging it all ...
...0.00 0.00 0 0.00 7 0 93 0 0
8 33 15.13 8 0.11 0.00 0 0.00 0.00 0 0.00 11 0 89 0 0
3 20 15.65 33 0.51 0.00 0 0.00 0.00 0 0.00 4 0 96 0 0
doing a ps, the 'server processes' don't look to be consuming much CPU,
other than the vmdaemon/syncer (that is 18 and 9 hrs respectively, right?)
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 0 0.0 0.0 0 0 ?? DLs Thu07AM 0:45.80 (swapper)
root 1 0.0 0.0 552 196 ?? SLs Thu07AM 0:03.37 /sbin/init --
root 2 0.0 0.0...
2006 Sep 28
2
Duplicate record weirdness ?
...Top marks Ezra, it's gone a long way to
stopping us from doing a nasty curl/cron hack :)
Okay so here's what we're using it for.
We're synchronising data from a Filemaker database, via web service request,
into a mysql database via a rails model called 'Syncer' [How's that for an
oddball use case!]. Now this model exposes a static method which goes away
and does the business. When it starts synchronising it creates a log record
for it, and when it finishes it updates that log record. Fair enough. All is
working beautifully and this is...
2017 Sep 29
1
Gluster geo replication volume is faulty
...r@gfs4::gfsvol_rep N/A Faulty N/A
N/A
Here is the output of the geo replication log file
[root@gfs1 ~]# tail -n 100 $(gluster volume geo-replication gfsvol
geo-rep-user@gfs4::gfsvol_rep config log-file)
[2017-09-29 15:53:29.785386] I [master(/gfs/brick2/gv0):1860:syncjob]
Syncer: Sync Time Taken duration=0.0357 num_files=1 job=3 return_code=12
[2017-09-29 15:53:29.785615] E [resource(/gfs/brick2/gv0):208:errlog]
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
. -e ssh -oPasswor...
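For reference, return_code=12 above is rsync's own exit status, "error in rsync protocol data stream", which usually points at the ssh transport or at rsync failing on the slave rather than at gluster itself. A quick sanity check, reusing the volume and slave names from the post:
  # confirm rsync is present on both ends, then re-check session status
  rsync --version
  gluster volume geo-replication gfsvol geo-rep-user@gfs4::gfsvol_rep status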
2009 Sep 29
1
Fax and dial-up connection issues
...les in 8191.664 system clock sample intervals (99.996%)
8192 samples in 8191.880 system clock sample intervals (99.999%)^C
--- Results after 7 passes ---
Best: 99.999 -- Worst: 99.906 -- Average: 99.970915, Difference: 100.021108
# xpp_sync:
svoip01:~# xpp_sync
Current sync: DAHDI
Best Available Syncers:
XBUS-01 (@usb-0000:00:1d.7-6) [usb:0000142] [ FXO*2 ]
XBUS-00 (@usb-0000:00:1d.7-5) [usb:1254] [ FXS*4 ]
XBUS-02 (@usb-0000:00:1d.7-3.1) [usb:X1036520] [ FXS*4 ]
XBUS-03 (@usb-0000:00:1d.7-3.2) [usb:X1036521] [ FXS*4 ]
=================================================...
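The same tool that prints this list also selects the sync source; for fax problems the usual advice is to sync from the FXO Astribank, since its clock is recovered from the telco line. A sketch, assuming the xpp_sync syntax shipped with dahdi-tools:
  # let the driver choose the best syncer (normally the FXO unit)
  xpp_sync auto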
2017 Oct 06
0
Gluster geo replication volume is faulty
...N/A        N/A
>
>
> Here is the output of the geo replication log file
> [root@gfs1 ~]# tail -n 100 $(gluster volume geo-replication gfsvol
> geo-rep-user@gfs4::gfsvol_rep config log-file)
> [2017-09-29 15:53:29.785386] I [master(/gfs/brick2/gv0):1860:syncjob]
> Syncer: Sync Time Taken duration=0.0357 num_files=1 job=3 return_code=12
> [2017-09-29 15:53:29.785615] E [resource(/gfs/brick2/gv0):208:errlog]
> Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
> --super --stats --numeric-ids --no-implied-dirs --existing --xattrs
> --acls...
2005 Dec 12
0
DRBD and XEN
...reboots a few seconds after.
I don't know if the problem is xen or drbd related? Unfortunately I have
no traces on the xen node.
I set "echo 60 > /proc/sys/kernel/panic", but it doesn't seem to be a
kernel panic, but a "xen" panic.
Please help.
-
-
-
syncer {
# Limit the bandwidth used by the resynchronisation process.
# default unit is KB/sec; optional suffixes K,M,G are allowed
#
rate 4M;
# All devices in one group are resynchronized in parallel.
# Resynchronisation of groups is serialized in ascending order.
# Put DRBD resou...
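If the box really dies with a hypervisor panic rather than a dom0 kernel panic, /proc/sys/kernel/panic will not capture anything; the hypervisor's own log is the place to look. A sketch for the Xen 3-era tools implied by the post:
  # dump the hypervisor message buffer from dom0 after the crash/reboot
  xm dmesg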
2003 May 27
0
Odd 'hang' trying to ls a directory ...
...oo just hangs ...
According to ps axl, I have the following in WCHAN states:
neptune# ps axl | awk '{print $9}' | sort | uniq -c
6 -
1 WCHAN
207 accept
103 inode
2 kqread
412 lockf
68 nanslp
8 pause
1 piperd
13 poll
3 psleep
58 sbwait
1 sched
626 select
1 syncer
3 ttyin
1 vlruwt
8 wait
Thoughts? Does anything above look like its 'hanging' or 'locking' the file
system? A few minutes later, there are a few variations:
neptune# ps axl | awk '{print $9}' | sort | uniq -c
6 -
1 WCHAN
207 accept
129 inode
426 lockf...
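The growing 'inode' and 'lockf' counts are the interesting part of that histogram: those are processes sleeping on vnode and file locks. Extending the poster's own pipeline shows which commands are stuck; a sketch using the same ps axl column layout:
  # list the commands currently blocked on the inode wait channel
  ps axl | awk '$9 == "inode" {print $13}' | sort | uniq -c | sort -rn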
2011 Feb 27
1
Recover botched drdb gfs2 setup .
...<rm/>
<totem consensus="4800" join="60" token="10000"
token_retransmits_before_loss_const="20"/>
</cluster>
[root@mcvpsam01 init.d]#
Drbd.conf
[root@mcvpsam01 init.d]# cat /etc/drbd.conf
resource r0 {
protocol C;
syncer { rate 1000M; }
startup {
wfc-timeout 120; # wait 2min for other peers
degr-wfc-timeout 120; # wait 2min if peer was already
                      # down before this node was rebooted
become-primary-on both;
}
net {...
2003 Jul 29
6
kernel deadlock
...elect c037c1a0 inetd
122 dc35bf60 e0382000 0 1 122 000004 3 inode c34ab600 syslogd
99 dc35c100 e037e000 0 1 99 000084 3 wait dc35c100 dhclient
6 dc35c5e0 defd1000 0 0 0 000204 3 vlrup dc35c5e0 vnlru
5 dc35c780 defce000 0 0 0 000204 3 syncer c037c0c8 syncer
4 dc35c920 defcb000 0 0 0 000204 3 psleep c036487c bufdaemon
3 dc35cac0 defc8000 0 0 0 000204 3 psleep c0372fc0 vmdaemon
2 dc35cc60 defc5000 0 0 0 000204 3 psleep c0351e58 pagedaemon
1 dc35ce00 dc361000 0 0 1 004284...
2009 Jul 28
2
DRBD on a xen host: crash on high I/O
...
#
# At most ONE global section is allowed.
# It must precede any resource section.
#
global {
# minor-count 64;
# dialog-refresh 5; # 5 seconds
# disable-ip-verification;
usage-count no;
}
common {
syncer { rate 50M; }
}
#
# this need not be r#, you may use phony resource names,
# like "resource web" or "resource mail", too
#
resource virtual1 {
protocol C;
handlers {
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
pri-lost-after-sb "...
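The pri-on-incon-degr handler above is the stock node-suicide pattern: writing 'o' to /proc/sysrq-trigger requests an immediate power-off, so a node that would become primary on inconsistent, degraded data takes itself out instead. It can be exercised in isolation, though obviously only on a scratch machine:
  # WARNING: powers the machine off on the spot, exactly as the handler does
  echo o > /proc/sysrq-trigger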
2013 Sep 27
1
lock order reversal in 10-alpha2
...029a41fa, rsp = 0x7fffffffcf28, rbp = 0x7fffffffcf40 ---
messages.0:Sep 23 10:36:02 leader kernel: lock order reversal:
messages.0-Sep 23 10:36:02 leader kernel: 1st 0xfffff801ba9be240 zfs
(zfs) @ /usr/src/sys/kern/vfs_mount.c:1237
messages.0-Sep 23 10:36:02 leader kernel: 2nd 0xfffff801babab7c8 syncer
(syncer) @ /usr/src/sys/kern/vfs_subr.c:2210
messages.0-Sep 23 10:36:02 leader kernel: KDB: stack backtrace:
messages.0-Sep 23 10:36:02 leader kernel: db_trace_self_wrapper() at
db_trace_self_wrapper+0x2b/frame 0xfffffe02397ef460
messages.0-Sep 23 10:36:02 leader kernel: kdb_backtrace() at
kdb_b...
2011 Jun 02
3
Problems with descriptions.
...lg sha1;
shared-secret "password";
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
rr-conflict disconnect;
}
syncer {
rate 500M;
verify-alg sha1;
al-extents 257;
}
on st01 {
device /dev/drbd0;
disk /dev/sdb;
address 192.168.3.151:7788;
meta-disk internal;
}
on st02...
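With verify-alg set as above, DRBD can run an online consistency check of the replicated device, reporting any out-of-sync blocks in the kernel log. A sketch; the post does not show the resource name, so 'all' is used:
  # start an online verify pass; progress is visible in /proc/drbd
  drbdadm verify all
  cat /proc/drbd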
2018 Jan 24
4
geo-replication command rsync returned with 3
...oes anyone else experienced this behavior...any idea ?
best regards
Dietmar
gfs 3.12.5 geo-rep log on master :
[2018-01-24 15:50:35.347959] I [master(/brick1/mvol1):1385:crawl]
_GMaster: slave's time stime=(1516808792, 0)
[2018-01-24 15:50:35.604094] I [master(/brick1/mvol1):1863:syncjob]
Syncer: Sync Time Taken duration=0.0294 num_files=1 job=2
return_code=3
[2018-01-24 15:50:35.605490] E [resource(/brick1/mvol1):210:errlog]
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs
--acls --ign...
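return_code=3 is again rsync's exit status: "errors selecting input/output files, dirs". Comparing rsync versions on master and slave is a cheap first check, since mismatched versions and files vanishing mid-sync are common culprits in geo-rep threads; the slave hostname below is an assumption:
  # the two versions should be compatible across the geo-rep session
  rsync --version | head -1
  ssh geo-rep-user@slavehost 'rsync --version | head -1'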
2018 Jan 25
0
geo-replication command rsync returned with 3
...?
>
> best regards
> Dietmar
>
>
> gfs 3.12.5 geo-rep log on master :
>
> [2018-01-24 15:50:35.347959] I [master(/brick1/mvol1):1385:crawl]
> _GMaster: slave's time stime=(1516808792, 0)
> [2018-01-24 15:50:35.604094] I [master(/brick1/mvol1):1863:syncjob]
> Syncer: Sync Time Taken duration=0.0294 num_files=1 job=2
> return_code=3
> [2018-01-24 15:50:35.605490] E [resource(/brick1/mvol1):210:errlog]
> Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
> --super --stats --numeric-ids --no-implied-dirs --existing --xat...
2008 Mar 05
0
ocfs2 and another node is heartbeating in our slot
...77
ip_address = 192.168.0.86
number = 0
name = suse3
cluster = susecluster
node:
ip_port = 7777
ip_address = 192.168.0.87
number = 1
name = suse4
cluster = susecluster
drbd.conf
global { usage-count yes; }
common {
syncer {
rate 100M;
}
startup {
#degr-wfc-timeout 0;
#wfc-timeout 10;
#become-primary-on both;
}
disk {
on-io-error detach;
}
}
resource drbd0 {
protocol C;
net {...
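"Another node is heartbeating in our slot" means two nodes are writing the same o2cb heartbeat slot, i.e. they disagree about cluster membership. A cheap sanity check, since ocfs2 requires /etc/ocfs2/cluster.conf to be identical on every node and each node: stanza to carry a unique number and name (as 0/suse3 and 1/suse4 above already do):
  # run on both suse3 and suse4; the checksums must match
  md5sum /etc/ocfs2/cluster.conf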
2003 Apr 20
0
4400+ cron processes causes server crash ...
...d)
1 (named)
5 (nfsd)
7 (nsd8x)
1 (pagedaemon)
13 (perl)
2 (pine)
4 (pipe)
31 (pop3d)
1 (portmap)
280 (postgres)
1 (ps)
4 (python2.1)
34 (qmgr)
1 (rpc.statd)
2 (rsync)
1 (rwhod)
1 (scp)
4 (screen)
14 (sh)
3 (ssh)
61 (sshd)
1 (swapper)
1 (syncer)
40 (syslogd)
18 (tcsh)
11 (timsieved)
1 (upclient)
1 (vmdaemon)
1 (vnlru)
1 COMMAND
Is there any way of finding out what jails "owned" those cron jobs
*after* the crash? I know I can find out on a running systems using
proc/*/status, but what about after the server...
2003 Sep 04
0
crash dumps to ar not supported ?
...7c38d
stack pointer = 0x10:0xdea01ecc
frame pointer = 0x10:0xdea01ef4
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, def32 1, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 5 (syncer)
interrupt mask = none
trap number = 12
panic: page fault
syncing disks... 8
done
Uptime: 1d0h9m53s
dumping to dev #ar/0x20001, offset 1279168
dump failed, reason: device doesn't support a dump routine
Automatic reboot in 15 seconds - press a key on the console to abort
-...
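The ar(4) ATA pseudo-RAID driver simply provides no dump routine, so crash dumps have to go to a device that does, typically a swap partition on one of the underlying plain disks. A sketch; the device name is an assumption:
  # point the dump device at a real disk's swap partition instead of ar0
  dumpon /dev/ad0s1b
  # or persistently, in /etc/rc.conf:  dumpdev="/dev/ad0s1b"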
2012 Aug 06
0
Problem with mdadm + lvm + drbd + ocfs ( sounds obvious, eh ? :) )
...opt {
device /dev/drbd0 ;
disk /dev/vg/lv_opt ;
meta-disk internal ;
net {
allow-two-primaries ;
}
startup {
become-primary-on both ;
wfc-timeout 120 ;
outdated-wfc-timeout 120 ;
degr-wfc-timeout 120 ;
}
syncer {
rate 100M ;
}
disk {
fencing resource-and-stonith ;
}
on admin1-drbd admin1 {
address 192.168.251.11:7789 ;
}
on admin2-drbd admin2 {
address 192.168.251.12:7789 ;
}
}
----
3- ocfs2 on top of drbd , configured as below:
--...
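One thing worth double-checking in a config like the above: "fencing resource-and-stonith" only does something useful together with a fence-peer handler, which the excerpt does not show. A sketch, assuming a Pacemaker stack (DRBD ships the script referenced below):
  # add to the resource stanza in /etc/drbd.conf:
  #   handlers { fence-peer "/usr/lib/drbd/crm-fence-peer.sh"; }
  # then re-apply:
  drbdadm adjust all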