similar to: HTB Message Storm HTB Delay <large number> > 5 secs

Displaying 20 results from an estimated 200 matches similar to: "HTB Message Storm HTB Delay <large number> > 5 secs"

2006 Aug 02
10
[PATCH 0/6] htb: cleanup
The HTB scheduler code is a mess; this patch set does some basic house cleaning. The first four should cause no code change, but the last two need more testing. -- Stephen Hemminger <shemminger@osdl.org> "And in the Packet there writ down that doome"
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour. Both are CPU limited long before the 10g link is. I've also tried mbuffer, but I get broken pipe errors part way through the transfer. I'm open to ideas for faster ways to either zfs send directly or through a compressed file of the zfs send output. For the moment I: zfs send > pigz, scp (arcfour) the gz file to the
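A streaming variant of that pipeline (a sketch only; the dataset, host name and arcfour cipher choice are assumptions, not taken from the post) avoids staging the gz file by piping pigz straight into zfs recv on the far side:

  # sender: compress with pigz and stream directly into zfs recv on the receiver
  zfs send tank/data@snap | pigz -c | ssh -c arcfour recvhost "pigz -dc | zfs recv -F tank/data"

This keeps the compression cost on the sending CPU while skipping the intermediate file and the extra scp pass.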
2012 Dec 14
12
any more efficient way to transfer snapshots between two hosts than an ssh tunnel?
Assuming a secure and trusted environment, we want to get the maximum transfer speed without the overhead from ssh. Thanks. Fred
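In a trusted network, one common way to drop the ssh overhead entirely is a raw TCP pipe with netcat; the host, port and dataset names below are illustrative only, and nc flag syntax varies between netcat flavors:

  # receiving host: listen for the stream and feed it into zfs recv
  nc -l 9090 | zfs recv -F tank/backup
  # sending host: push the snapshot over a plain TCP connection
  zfs send tank/data@snap | nc recvhost 9090

The stream is unencrypted and unauthenticated, which is only acceptable on the kind of secure segment the poster describes.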
2006 Jun 15
0
[PATCH 1/2] Runtime configuration of HTB's HYSTERESIS option (kernel)
The HTB qdisc has a compile-time option, HTB_HYSTERESIS, that trades accuracy of traffic classification for CPU time. These patches change hysteresis to be a runtime option under the control of "tc". The effects of HYSTERESIS on HTB's accuracy are significant (see chapter 7, section 7.3.1, pp 69-70 in Jesper Brouer's thesis: http://www.adsl-optimizer.dk/thesis/ ),
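For context, a minimal HTB configuration driven from tc looks like the sketch below; the device name and rates are invented, and the exact name of the runtime hysteresis knob these patches add is not shown here since it is defined by the patch itself:

  # root HTB qdisc with a default class
  tc qdisc add dev eth0 root handle 1: htb default 10
  # one shaped class; hysteresis affects how precisely this rate/ceil pair is enforced
  tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 2mbit burst 15k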
2008 Dec 08
5
How to use mbuffer with zfs send/recv
>> How do I compile mbuffer for our system? Thanks to Mike Futerko for help with the compile, I now have it installed OK. >> and what syntax do I use to invoke it within the zfs send recv? Still looking for answers to this one. Any example syntax, gotchas etc. would be much appreciated. -- Kind regards, Jules free. open. honest. love. kindness. generosity. energy. frenetic.
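One commonly cited invocation (a sketch only; pool names, buffer sizes and the port are placeholders, not from the thread) runs mbuffer on both ends of the pipe:

  # receiving host: listen on a TCP port, buffer, and feed zfs recv
  mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/backup
  # sending host: buffer the zfs send output and push it to the receiver
  zfs send tank/data@snap | mbuffer -s 128k -m 1G -O recvhost:9090

The large memory buffer (-m) is what smooths out the bursty zfs send output so the receiver is not left waiting.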
2017 Jun 19
2
CloneFunctionInto produces invalid debug info
- old Keno +current Keno > On Jun 19, 2017, at 2:59 PM, Adrian Prantl <aprantl at apple.com> wrote: > > In your example the instructions in the cloned function have debug locations belonging to a different function, and the function itself is missing a DISubprogram metadata attachment. > >> (lldb) p OldFunc->dump() >> >> ; Function Attrs: nounwind optsize
2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new server, but for some reason I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
2017 Jun 16
2
CloneFunctionInto produces invalid debug info
If you are cloning into the same LLVM module, the CU should not be cloned. If you don't mind sharing your code, I can try to help diagnose why the CU gets cloned... just send me a patch that applies to trunk and instructions. -- adrian > On Jun 16, 2017, at 1:54 PM, Sergei Larin <slarin at codeaurora.org> wrote: > > Sorry… It takes a pass that was not accepted for upstreaming….
2010 Feb 02
7
Help needed with zfs send/receive
Hi folks, I'm having (as the title suggests) a problem with zfs send/receive. Command line is like this: pfexec zfs send -Rp tank/tsm@snapshot | ssh remotehost pfexec zfs recv -v -F -d tank This works like a charm as long as the snapshot is small enough. When it gets too big (meaning somewhere between 17G and 900G), I get ssh errors (can't read from remote host). I tried
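One way people usually keep each transferred stream small enough (a hedged sketch; the snapshot names @base and @next are invented, the rest mirrors the command shape from the post) is to send the full stream once and then only incremental deltas:

  # initial full replication
  pfexec zfs send -Rp tank/tsm@base | ssh remotehost pfexec zfs recv -v -F -d tank
  # later runs: send only the changes accumulated since the previous snapshot
  pfexec zfs send -Rp -I tank/tsm@base tank/tsm@next | ssh remotehost pfexec zfs recv -v -F -d tank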
2017 Jun 20
2
CloneFunctionInto produces invalid debug info
I was just going to say: With well-formed debug info it should create a deep copy up until the DISubprogram, but no further. But because the DISubprogram linked to the Function is missing, the special handling of the DISubprogram (that would prohibit cloning the DICompileUnit) is side-stepped. But then I remembered the discussion we had in
2008 Nov 06
45
'zfs recv' is very slow
Hi, I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on B is running extremely slowly. If I run the zfs send on A and redirect output to a file, it sends at 2MB/sec. But when I use 'zfs send
2009 Jan 07
9
'zfs recv' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert <carsten.aulbert at aei.mpg.de> sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningful sized snapshots from say an X4540 takes up to 24 hours, for as little as 300GB
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority / throttling it too much?
2012 Jul 11
1
Decoding a continuous stream
Hi, I've been trying to decode a FLAC audio stream. I have a reader which sends raw byte data to my FLAC wrapper class. Only once the decode function below returns will the reader send new data. Hence I want to decode until the stream is empty, but I will add new data to the stream once it is empty. void MyFlacCoder::decode(char *data, int bytes) { mBuffer = input;
2009 Nov 20
2
ZFS Send Priority and Performance
I have several X4540 Thor systems with one large zpool that replicate data to a backup host via zfs send/recv. The process works quite well when there is little to no usage on the source systems. However, when the source systems are under load, replication slows to a near crawl. Without load, replication streams along usually near 1 Gbps, but drops down to anywhere between 0 - 5000
2010 Oct 02
3
out of HDD space - zfs degraded
Overnight I was running a zfs send | zfs receive (both within the same system / zpool). The system ran out of space, a drive went offline, and the system is degraded. This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18 23:43:48 EDT 2010. The following logs are also available at http://www.langille.org/tmp/zfs-space.txt <- no line wrapping This is what was running: #
2010 May 28
6
zfs send/recv reliability
After looking through the archives I haven't been able to assess the reliability of a backup procedure which employs zfs send and recv. Currently I'm attempting to create a script that will allow me to write a zfs stream to a tape via tar like below. # zfs send -R pool@something | tar -c > /dev/tape I'm primarily concerned with the possibility
2011 Mar 11
1
UDP Perfomance tuning
Hi, We are running 5.5 on an HP ProLiant DL360 G6. Kernel version is 2.6.18-194.17.1.el5 (we had also tested with the latest available kernel, kernel-2.6.18-238.1.1.el5.x86_64). We are running some performance tests using the "iperf" utility. We are seeing very bad and inconsistent performance in the UDP testing. The maximum we could get was 440 Mbits/sec, and it varies from 250 to 440
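For reference, a typical UDP test pair with iperf (v2 syntax; the target bandwidth, datagram size and socket buffer below are illustrative assumptions) looks like:

  # server side: receive UDP with an enlarged socket buffer
  iperf -s -u -w 4M
  # client side: offer ~900 Mbit/s of UDP traffic in 1400-byte datagrams
  iperf -c serverhost -u -b 900M -l 1400 -w 4M

Raising the kernel socket buffer ceilings (net.core.rmem_max / net.core.wmem_max via sysctl) is usually the first tuning step when UDP results sit far below line rate.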
2004 Jun 17
2
[PATCH] (3/4) delay scheduler race with device stopped
The delay scheduler dequeue routine has some code cut-and-pasted from the TBF scheduler that caused a race with E1000 when the ring got full. It looks like net schedulers should never be calling netif_queue_stopped because the queue may get unstopped by interrupt or receive soft irq (NAPI), which races with the dequeue in the transmit scheduler. Also, if requeuing the packet fails, it is probably
2007 Sep 02
4
Performance Issues
My apologies for cross-posting. We have a Dell 6850 with 8GB of memory, four 3.2GHz CPUs, and a PERC 4 RAID controller with fourteen 300GB 10Krpm disks on a PowerVault 220S, and a PowerVault 124T LTO-3 tape system on a separate 160MB/sec Adaptec SCSI card. The disks are configured as two 2TB RAID 0 partitions using the PERC 4 hardware. The problem is - reading from the disk, and