search for: 155k

Displaying 6 results from an estimated 6 matches for "155k".

2009 Sep 10
1
Wine: Bug 12524 - MBT Navigator: can't create object: ADOCommand
...6.126|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 100182 (98K) [text/plain]
Saving to: `winetricks'
100%[==============================================================================================================================================>] 100,182 155K/s in 0.6s
2009-09-09 22:44:54 (155 KB/s) - `winetricks' saved [100182/100182]
XIO: fatal IO error 9 (Bad file descriptor) on X server ":0.0"
XIO: fatal IO error 9 (Bad file descriptor) on X server ":0.0" after 73 requests (72 known processed) with 0 events remain...
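For reference, a fetch along the lines below would produce the download log above; the actual URL is truncated out of the snippet, so WINETRICKS_URL is a hypothetical placeholder, not the address the reporter used:

    # fetch the winetricks script (WINETRICKS_URL is a placeholder for the real download location)
    wget -O winetricks "$WINETRICKS_URL"
    chmod +x winetricks
    sh winetricks    # run it; individual verbs can also be passed as arguments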
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB of 2.8TB in use (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We have taken snapshots every 4 hours for the first few days. If you add up the snapshot references, the total appears somewhat high versus daily use (mostly mailboxes, spam, etc. changing), but say an aggregate of no more than 400+MB a
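For anyone trying to reproduce this kind of accounting, a minimal sketch of adding up per-snapshot space, assuming a pool named tank (the pool name is not given in the snippet):

    # list each snapshot with the space it holds exclusively
    zfs list -r -t snapshot -o name,used -s creation tank

    # sum exact byte counts across all snapshots (-H: no header, -p: raw numbers; -p may require a newer ZFS release)
    zfs list -Hp -r -t snapshot -o used tank | awk '{ total += $1 } END { printf "%.1f MiB\n", total/1048576 }'

Note that each snapshot's USED column counts only blocks unique to that snapshot, so summing them can behave unintuitively versus the total space actually held by snapshots.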
2004 Dec 30
0
Multiple IPs in one Zone
...Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target    prot opt in    out  source     destination
    0     0 DROP      !icmp --  *     *   0.0.0.0/0  0.0.0.0/0    state INVALID
   99  4761 eth1_fwd  all   --  eth1  *   0.0.0.0/0  0.0.0.0/0
  736  155K eth0_fwd  all   --  eth0  *   0.0.0.0/0  0.0.0.0/0
  579 68667 eth2_fwd  all   --  eth2  *   0.0.0.0/0  0.0.0.0/0
    0     0 Reject    all   --  *     *   0.0.0.0/0  0.0.0.0/0
    0     0 reject    all   --  *     *   0.0.0.0/0  0.0.0.0/0
Ch...
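The counters above come from Shorewall-generated netfilter rules; a minimal sketch of inspecting and resetting them, assuming the same eth0/eth1/eth2 interfaces:

    # show the FORWARD chain with per-rule packet/byte counters (numeric output, no DNS lookups)
    iptables -L FORWARD -v -n

    # print exact, non-abbreviated counters, e.g. the 155K bytes shown for eth0_fwd above
    iptables -L FORWARD -v -n -x

    # zero the counters to watch traffic accumulate from a known point
    iptables -Z FORWARD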
2006 Feb 24
17
Re: [nfs-discuss] bug 6344186
Joseph Little wrote:
> I'd love to "vote" to have this addressed, but apparently votes for
> bugs are not available to outsiders.
>
> What's limiting Stanford EE's move to using ZFS entirely for our
> snapshotting filesystems and multi-tier storage is the inability to
> access .zfs directories and snapshots in particular on NFSv3 clients.
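For context, the mechanism under discussion is ZFS's per-dataset .zfs/snapshot directory; a minimal local sketch, assuming a hypothetical dataset tank/home (at the time of this thread, the same directory was not reachable from NFSv3 clients, which is the subject of bug 6344186):

    # make the .zfs control directory visible in directory listings (it is reachable either way locally)
    zfs set snapdir=visible tank/home

    # take a snapshot and browse it through the filesystem
    zfs snapshot tank/home@hourly
    ls /tank/home/.zfs/snapshot/hourly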
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
...=1 via virtio-scsi to SCSI core. Using a KVM guest with 32x vCPUs and 4G memory, the results for 4x random I/O now look like:

workload         | jobs | 25% write / 75% read | 75% write / 25% read
-----------------|------|----------------------|---------------------
1x rd_mcp LUN    |   8  | ~155K IOPs           | ~145K IOPs
16x rd_mcp LUNs  |  16  | ~315K IOPs           | ~305K IOPs
32x rd_mcp LUNs  |  16  | ~425K IOPs           | ~410K IOPs

The full fio randrw results for the six test cases are attached below. Also, using a workload of fio numjobs > 16 currently makes perfor...
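A minimal sketch of a fio invocation in the spirit of these randrw runs; the block size, queue depth, and target device below are assumptions, since the snippet does not show the actual job file:

    # 75% read / 25% write random I/O against one LUN inside the guest
    fio --name=randrw-lun --filename=/dev/sdb --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixwrite=25 --bs=4k --iodepth=32 \
        --numjobs=16 --runtime=60 --time_based --group_reporting

Repeating the run with --rwmixwrite=75 gives the write-heavy column.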