search for: 145k

Displaying 19 results from an estimated 19 matches for "145k".

2013 Mar 18
2
Disk iops performance scalability
Hi, I'm seeing a drop-off in IOPS when more vCPUs are added: 3.8.2 kernel / xen-4.2.1 / single domU / LVM backend / 8GB RAM domU / 2GB RAM dom0, dom0_max_vcpus=2, dom0_vcpus_pin.
domU 8 cores: fio result 145k iops
domU 10 cores: fio result 99k iops
domU 12 cores: fio result 89k iops
domU 14 cores: fio result 81k iops
ioping . -c 3
4096 bytes from . (ext4 /dev/xvda1): request=1 time=0.1 ms
4096 bytes from . (ext4 /dev/xvda1): request=2 time=0.7 ms
4096 bytes from . (ext4 /dev/xvda1): request=3 time=0.8 ms...
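For reference, a minimal sketch of the kind of fio run and ioping check described above, executed inside the domU; the target directory, file size, job count, and runtime are assumptions, since the post does not give the fio parameters:

    # hypothetical 4K random-read job on the ext4 filesystem backed by /dev/xvda1
    fio --name=randread --directory=/mnt/test --size=1G --direct=1 \
        --ioengine=libaio --rw=randread --bs=4k --iodepth=64 --numjobs=8 \
        --runtime=60 --time_based --group_reporting
    # latency spot check, as in the post
    ioping -c 3 .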
2009 Mar 16
3
Asterisk is not designed for University with large user base?
...for a small user base; I don't have experience with large-scale Asterisk implementations. I know little about sipX. Does anyone in the community have any input about this?
Vincent Li
System Administrator, BRC, UBC
perl -e'print"\131e\164\040\101n\157t\150e\162\040\114i\156u\170\040\107e\145k\012"'
2014 Jun 26
7
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
...epth=64, bs=4K, jobs=N) is run inside the VM to verify the improvement. I just created a small quad-core VM and ran fio inside the VM, and num_queues of the virtio-blk device is set to 2, but it looks like the improvement is still obvious.
1) About scalability:
- without multi-vq feature: jobs=2, throughput: 145K iops; jobs=4, throughput: 100K iops
- with multi-vq feature: jobs=2, throughput: 193K iops; jobs=4, throughput: 202K iops
2) About throughput:
- without multi-vq feature: throughput: 145K iops
- with multi-vq feature: throughput: 202K iops
So in my test, even for a quad-core VM, if the v...
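A minimal sketch of the guest-side benchmark described above (iodepth=64, bs=4K, jobs=N); the read pattern, device path, and runtime are assumptions, and the host-side num-queues property is named as in current QEMU, which may differ from the patched tree under discussion:

    # host side (assumed syntax): virtio-blk device with two queues
    #   -device virtio-blk-pci,drive=vd0,num-queues=2
    # guest side: 4K random I/O, queue depth 64, N jobs
    fio --name=vblk --filename=/dev/vdb --direct=1 --ioengine=libaio \
        --rw=randread --bs=4k --iodepth=64 --numjobs=2 \
        --runtime=60 --time_based --group_reporting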
2014 Jun 26
0
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
...> verify the improvement.
> I just created a small quad-core VM and ran fio inside the VM, and
> num_queues of the virtio-blk device is set to 2, but it looks like the
> improvement is still obvious.
>
> 1) About scalability:
> - without multi-vq feature: jobs=2, throughput: 145K iops; jobs=4, throughput: 100K iops
> - with multi-vq feature: jobs=2, throughput: 193K iops; jobs=4, throughput: 202K iops
>
> 2) About throughput:
> - without multi-vq feature: throughput: 145K iops
> - with multi-vq feature: throughput: 202K iops...
2014 Jun 13
6
[RFC PATCH 0/2] block: virtio-blk: support multi vq per virtio-blk
...epth=64, bs=4K, jobs=N) is run inside the VM to verify the improvement. I just created a small quad-core VM and ran fio inside the VM, and num_queues of the virtio-blk device is set to 2, but it looks like the improvement is still obvious.
1) About scalability:
- without multi-vq feature: jobs=2, throughput: 145K iops; jobs=4, throughput: 100K iops
- with multi-vq feature: jobs=2, throughput: 186K iops; jobs=4, throughput: 199K iops
2) About throughput:
- without multi-vq feature: top throughput: 145K iops
- with multi-vq feature: top throughput: 199K iops
So even for one quad-core VM, if th...
2009 Jan 22
1
oslec + dahdi
Hi list, I installed dahdi-linux successfully with the oslec module for echo cancellation, but when I specify the oslec echo canceller in system.conf it shows me errors: DAHDI_ATTACH_ECHOCAN failed on channel 4: Invalid argument (22). I see that the echo cancellers mg2, kb1, sec2, and sec are supported; why is oslec not supported, when it has support to be compiled with dahdi-linux? best
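For context, the echo canceller is normally selected per channel range in /etc/dahdi/system.conf; a minimal sketch, where the 1-15 channel range is a hypothetical example and oslec only attaches if dahdi-linux was actually built with the OSLEC sources:

    # /etc/dahdi/system.conf
    echocanceller=oslec,1-15
    # one of the cancellers reported as supported, for comparison:
    # echocanceller=mg2,1-15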
2010 Mar 02
9
Filebench Performance is weird
...is a snapshot of my arcstat output in case of high throughput --- notice the 100% hits ratio
arcsz, read, hits, Hit%, miss, miss%, dhit, dh%, dmis, dm%, phit, ph%, pmis, pm%, mhit, mh%, mmis, mm%, mfug, mrug
1G,  31M,  31M,  99, 111K, 0, 28M,  99, 99K, 0,   2M,  99, 12K, 0, 1M, 98, 13K, 1, 43, 43
1G, 147K, 145K,  99,   1K, 0, 14K,  99,   2, 0, 131K,  99,  1K, 0,  0,  0,   0, 0,  0,  0
1G, 166K, 166K, 100,    0, 0, 37K, 100,   0, 0, 128K, 100,   0, 0,  0,  0,   0, 0,  0,  0
1G,  42K,  42K, 100,    0, 0, 42K, 100,   0, 0,  256, 100,   0, 0,  0,  0,   0, 0,  0,  0
1G,  42K,  42K, 100,    0, ...
2007 Oct 05
1
Very slow file copy performance over a WAN (HELP)
...1) I'm getting incredibly slow file copy performance using smbclient on a linux machine on one side of the WAN ... As you can imagine, this makes all of our file shares unusable over the WAN. It's not an issue with WAN performance, because using scp to transfer the same file, I get speeds of ~145k/s. A tcpdump of the file copy of a ~2MB file that actually times out with the following error is up at: http://emagiccards.com/james/copyfileusingsmbclient.tar.bz2 -- Timeout Error -- Short read when getting file \Finance\monthly_reports\Aug07.xls. Only got 967680 bytes. Error Call timed out: serve...
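A sketch of the two transfers being compared; the server name, username, and remote paths are placeholders rather than details from the post:

    # SMB copy over the WAN with smbclient
    smbclient //fileserver/Finance -U user -c 'get monthly_reports/Aug07.xls'
    # the same file over scp, which reportedly sustains ~145k/s
    scp user@fileserver:/export/Finance/monthly_reports/Aug07.xls .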
2003 Aug 14
1
RELENG_4_8 isos?
I've built RELENG_4 isos with no problem; is there a trick to building the RELENG_4_8 isos? I am building the 'miniinst' disk - that is, with the release, docs, and a ports tree, but no ports. This is so I can do a fresh install without having to update the world... I am using a script that looks like this...
#!/bin/sh
rm -rf /usr/obj
cd /usr/src
make -DNOCLEAN world kernel
cd
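Assuming the only difference is the source branch, one rough sketch is to check out the RELENG_4_8 tag and drive the build from /usr/src/release; the paths, supfile details, and variable values below are placeholders and should be checked against release(7) for that era:

    # cvsup supfile line selecting the 4.8 security branch instead of RELENG_4
    *default release=cvs tag=RELENG_4_8
    # then, roughly:
    cd /usr/src/release
    make release CHROOTDIR=/local/release BUILDNAME=4.8-RELENG \
        CVSROOT=/home/ncvs RELEASETAG=RELENG_4_8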
2014 Jun 20
3
[PATCH v1 0/2] block: virtio-blk: support multi vq per virtio-blk
..., bs=4K, jobs=N) is run inside the VM to verify the improvement. I just created a small quad-core VM and ran fio inside the VM, and num_queues of the virtio-blk device is set to 2, but it looks like the improvement is still obvious.
1) About scalability:
- without multi-vq feature: jobs=2, throughput: 145K iops; jobs=4, throughput: 100K iops
- with multi-vq feature: jobs=2, throughput: 186K iops; jobs=4, throughput: 199K iops
2) About throughput:
- without multi-vq feature: top throughput: 145K iops
- with multi-vq feature: top throughput: 199K iops
So in my te...
2006 Jun 12
3
ZFS + Raid-Z pool size incorrect?
...ONLINE 0 0 0
  c2t0d0  ONLINE  0 0 0
  c2t1d0  ONLINE  0 0 0
  c2t2d0  ONLINE  0 0 0
errors: No known data errors
bash-3.00# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
sata  145K  825G   49K    /sata
bash-3.00# zpool destroy -f sata
bash-3.00# zpool create sata mirror c2t0d0 c2t1d0
bash-3.00# zpool list
NAME  SIZE  USED   AVAIL  CAP  HEALTH  ALTROOT
sata  278G  52.5K  278G   0%   ONLINE  -
bash-3.00# zpool status
pool: sat...
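For reference, the two size views normally differ on a raidz pool because zpool list reports the summed raw capacity of all member disks, parity included, while zfs list reports space usable by datasets; a minimal sketch with the same disk names:

    zpool create sata raidz c2t0d0 c2t1d0 c2t2d0
    zpool list   # raw capacity of all three disks
    zfs list     # usable space, parity excluded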
2006 May 12
1
zfs panic when unpacking open solaris source
...nute_22  625K  -  1.98G  -
home/cjg@minute_23  565K  -  1.98G  -
home/cjg@minute_24  470K  -  1.98G  -
home/cjg@minute_25  495K  -  1.98G  -
home/cjg@minute_26  305K  -  1.98G  -
home/cjg@minute_27  314K  -  1.98G  -
home/cjg@minute_28  145K  -  1.98G  -
home/cjg@minute_29  266K  -  1.98G  -
home/cjg@minute_30  438K  -  1.98G  -
home/cjg@minute_31  584K  -  1.98G  -
home/cjg@minute_32  524K  -  1.98G  -
home/cjg@minute_33  538K  -  1.98G  -
home/cjg@minute_34  564K  -  1.9...
2009 Apr 08
1
Call Pickup Works w/Linksys ATA, not with Cisco 7940G
I have an Asterisk 1.4.18 with a mix of cordless phones connected using Linksys SPA2102 ATAs and Cisco 7940G
2009 Dec 08
28
wow runtime error, new with patch 3.3
This error occurs at the login screen, shortly after entering the password and hitting enter. The game worked pre-patch (3.2).
Microsoft Visual C++ Runtime Library
Runtime error!
Program: c:\program files\world of warcraft\wow.exe
R6034
An application has made an attempt to load the C runtime library incorrectly. Please contact the application's support team for more information.
thanks
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
...SI core. Using a KVM guest with 32x vCPUs and 4G memory, the results for 4x random I/O now look like:
workload        | jobs | 25% write / 75% read | 75% write / 25% read
----------------|------|----------------------|---------------------
1x rd_mcp LUN   |  8   | ~155K IOPs           | ~145K IOPs
16x rd_mcp LUNs |  16  | ~315K IOPs           | ~305K IOPs
32x rd_mcp LUNs |  16  | ~425K IOPs           | ~410K IOPs
The full fio randrw results for the six test cases are attached below. Also, using a workload of fio numjobs > 16 currently makes performance start to fall off...
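A sketch of the kind of fio randrw job behind the mixed read/write columns above; the device path, block size, and runtime are assumptions:

    # 75% read / 25% write random mix against a single rd_mcp-backed LUN
    fio --name=randrw75 --filename=/dev/sdb --direct=1 --ioengine=libaio \
        --rw=randrw --rwmixread=75 --bs=4k --iodepth=64 --numjobs=8 \
        --runtime=60 --time_based --group_reporting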