search for: 231m

Displaying 7 results from an estimated 7 matches for "231m".

2011 Jun 01
1
How to properly read "zpool iostat -v" ? ;)
...3 10.4M 17.9K
c7t3d0        -      -    333     3  12.0M  17.9K
c7t4d0        -      -    340     3  10.7M  17.9K
c7t5d0        -      -    340     3  10.5M  17.9K
cache         -      -      -     -      -      -
  c4t1d0p2 230M  15.8G      0     0      0      0
  c4t1d0p3 231M  15.5G      0     0      0      0
----------  -----  -----  -----  -----  -----  -----
Sometimes values are even more weird, i.e. here pool reads are even less than one component drive's workload, not to mention all six of them:
              capacity     operations    bandwidth
pool...
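One known source of confusion with these numbers: `zpool iostat` run without an interval argument reports averages since boot rather than current rates, so per-device figures need not line up with what the pool is doing right now. As a side sketch, here is a minimal Python helper for turning such rows into numbers so they can be summed and compared programmatically (`parse_size` and `parse_row` are hypothetical helpers, not part of any tool mentioned in the thread):

```python
def parse_size(tok):
    """'231M' -> bytes; '-' (not applicable) -> None; plain ints pass through."""
    if tok == "-":
        return None
    units = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}
    if tok[-1] in units:
        return int(float(tok[:-1]) * units[tok[-1]])
    return int(tok)

def parse_row(line):
    """Columns of a `zpool iostat -v` device row:
    name, alloc, free, read/write ops, read/write bandwidth."""
    name, alloc, free, rops, wops, rbw, wbw = line.split()
    return {"name": name,
            "alloc": parse_size(alloc), "free": parse_size(free),
            "ops": (parse_size(rops), parse_size(wops)),
            "bw": (parse_size(rbw), parse_size(wbw))}

row = parse_row("c4t1d0p3 231M 15.5G 0 0 0 0")
```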
2007 May 09
1
dsl iso and xen
...'] root = "/dev/sda2 ro" <configuration file>
The filesystem on the Xen server is shown below.
da10:/etc/xen/vm # df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.6G  4.2G  5.4G  45% /
udev            232M  164K  231M   1% /dev
And the DSL iso is lying in /tmp (sda2):
da10:/etc/xen/vm # ls -l /tmp/dsl-3.3.iso
-rw-r--r-- 1 root root 52056064 May  8  2007 /tmp/dsl-3.3.iso
I'll then start up the domain with: xm create /etc/xen/vm/dsl -c
The domain starts up, but I get a kernel panic from t...
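For reference, this is a hedged sketch of how an ISO is typically attached as a virtual CD in an xm-style domU config file (the ISO path is taken from the snippet; the kernel/ramdisk paths, memory size, and cdrom device name are assumptions and vary by setup, and booting a live CD may additionally require an HVM guest):

```
# sketch of /etc/xen/vm/dsl -- illustrative only, not the poster's file
kernel  = "/boot/vmlinuz-xen"       # assumption: paravirt kernel path
ramdisk = "/boot/initrd-xen"        # assumption
memory  = 128
name    = "dsl"
disk    = [ 'file:/tmp/dsl-3.3.iso,hdc:cdrom,r' ]
root    = "/dev/hdc ro"             # root on the CD, not the dom0 disk
```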
2008 Jun 30
0
[PATCH] qemu xen-console, limit buffering
...PID  USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
10281  root  15  0 76536  14m 2448 S    2  1.0 0:00.61 qemu-dm
Couple minutes later: (17% mem usage)
  PID  USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
10281  root  16  0  291m 231m 2448 R   15 17.0 0:16.34 qemu-dm
Much later: (72.8% mem usage)
  PID  USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
10281  root  15  0 1052m 989m 2448 S   10 72.8 1:13.25 qemu-dm
Attached patch sets dom->buffer.max_capacity to xend confi...
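The patch itself is not shown in the snippet, but the idea it describes (bounding the console buffer via dom->buffer.max_capacity) can be illustrated with a small sketch. This is a hypothetical Python illustration, not the actual qemu-dm C code, and it assumes a drop-oldest policy, which may differ from the real patch:

```python
class BoundedBuffer:
    """Console-style buffer that never grows past max_capacity,
    so a chatty guest cannot balloon the daemon's memory use."""
    def __init__(self, max_capacity):
        self.max_capacity = max_capacity
        self.data = bytearray()

    def append(self, chunk):
        self.data.extend(chunk)
        excess = len(self.data) - self.max_capacity
        if excess > 0:
            del self.data[:excess]   # drop the oldest bytes

buf = BoundedBuffer(max_capacity=8)
buf.append(b"0123456789")            # 10 bytes into an 8-byte cap
```

Without the cap, every byte the guest writes and nobody reads stays resident, which matches the steadily climbing RES column above.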
2008 Aug 27
2
Bad file descriptor with maildir and bzip2 files
...000
11:39:18.205993 read(12, "~\214\261E*@[\200\245\302\232\201\334\331\272\225NQ\244\260\371\3473cc\312\255\326\246\17\'\222"..., 4096) = 4096
< continues reading from the file >
11:39:18.225970 read(12, "+:E\225\7\217W\264\367\27\226\262i9\34\227=\350\250BcmF\306\'\231m\265\233\337\343\237"..., 4096) = 2581
11:39:18.226040 read(12, "", 4096) = 0
< 26x more of this read >
11:39:18.233913 munmap(0xb7c68000, 3600384) = 0
11:39:18.234070 close(12) = 0
11:39:18.234120 munmap(0xb7ff2000, 4096) = 0
11:39:18.234184 fcntl64(12, F_GE...
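The tail of the trace is the telling part: close(12) is immediately followed by fcntl64(12, ...), i.e. a syscall on a descriptor that has just been closed, which is exactly the condition that returns EBADF ("Bad file descriptor"). A minimal, hypothetical Python reproduction of that failure mode:

```python
import errno, os

fd = os.open(os.devnull, os.O_RDONLY)
os.close(fd)                 # like the close(12) in the trace
try:
    os.fstat(fd)             # any further use of the stale fd...
    got = None
except OSError as e:
    got = e.errno            # ...fails with EBADF (errno 9 on POSIX)
```

Nothing opens another file between the close and the fstat here, so the number is guaranteed stale; in a multi-threaded process the fd number can be reused in that window, turning the same bug into silent corruption instead of a clean EBADF.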
2008 Aug 05
1
Also seeing high winbindd CPU usage
I think somebody had a similar problem (also on Solaris), but that thread seemed to die. I've compiled (with Sun Studio cc) and installed samba-3.2.1 on a Solaris 10 x64 box, which is a member of a (Windows Server 2003 controlled) domain. I previously had samba 3.0.28a running on the same machine without any problems. Now winbindd is eating up all of the CPU (on the CPU it's assigned
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and iostat -xn show lots of idle disk time, no above-average service times, and no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
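To put 400K/s in perspective, a back-of-the-envelope calculation (the 10 TB allocated figure is hypothetical; the post does not state how much data the pool holds):

```python
def scrub_days(allocated_bytes, rate_bytes_per_s):
    """Naive lower bound: time to scan `allocated_bytes` at a fixed rate."""
    return allocated_bytes / rate_bytes_per_s / 86400

slow = scrub_days(10 * 2**40, 400 * 2**10)   # ~311 days at 400K/s
fast = scrub_days(10 * 2**40, 100 * 2**20)   # ~1.2 days at 100M/s
```

At the reported rate the scrub effectively never finishes, which is why the MB/s-vs-K/s regression matters far more than the absolute load numbers.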
2007 Apr 18
33
LZO compression?
Hi, I don't know if this has been discussed before, but have you thought about adding LZO compression to ZFS? One zfs-fuse user has provided a patch which implements LZO compression, and he claims better compression ratios *and* better speed than lzjb. The miniLZO library is licensed under the GPL, but the author specifically says that other licenses are available by request. Has this