search for: 141g

Displaying 10 results from an estimated 10 matches for "141g".

2009 Apr 15
3
MySQL On ZFS Performance(fsync) Problem?
...211488 5619804 0 12 0 0 0 0 0 0 508 509 508 4341 9636 17853 0 1 99 ^C [root at ssd /data/mysqldata3]#zpool iostat data 1 capacity operations bandwidth pool used avail read write read write ---------- ----- ----- ----- ----- ----- ----- data 141G 37.9G 4 51 144K 3.15M data 141G 37.9G 1 1.50K 11.9K 6.06M data 141G 37.9G 0 1.37K 0 5.48M data 141G 37.9G 0 1.49K 0 5.98M data 141G 37.9G 214 1.45K 5.22M 7.27M data 141G 37.9G 0 1.37K 0 5.48M...
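The iostat trace above shows a steady stream of small synchronous writes landing on the pool, which is what fsync-heavy InnoDB traffic looks like. As a hedged sketch only (the filesystem name data/mysqldata3 is taken from the prompt above, the log device name is purely hypothetical, and none of this is confirmed by the thread), the usual first steps are to match recordsize to the InnoDB page size and to see whether the ZIL is the choke point:

  # match the 16K InnoDB page size; affects newly written files only
  zfs set recordsize=16K data/mysqldata3

  # watch per-device load while the fsync workload runs
  zpool iostat -v data 1

  # on builds with slog support, a separate log device can absorb the sync writes
  zpool add data log c2t1d0   # hypothetical device name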
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB used of 2.8TB (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We have snapshots every 4 hours for the first few days. If you add up the snapshot references, the total appears somewhat high versus daily use (mostly mail boxes, spam, etc. changing), but say an aggregate of no more than 400+MB a
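Snapshot consumption is easiest to pin down per dataset. A minimal sketch, assuming a pool named tank (the thread does not name it) and, for the second command, a ZFS release new enough to expose the usedby* properties:

  # space each snapshot would free if destroyed on its own
  zfs list -r -t snapshot -o name,used,referenced tank

  # per-dataset split between live data and snapshots (newer releases only)
  zfs get -r usedbysnapshots,usedbydataset tank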
2009 May 04
2
FW: Oracle 9204 installation on linux x86-64 on ocfs
.../mapper/VolGroup00-LogVol04 2.0G 53M 1.9G 3% /tmp /dev/mapper/VolGroup00-LogVol02 3.0G 1.8G 1.1G 64% /usr /dev/mapper/VolGroup00-LogVol03 2.0G 94M 1.8G 5% /var /dev/mapper/VolGroup01-u01 148G 93M 141G 1% /u01 /dev/sdc 600G 1.1G 599G 1% /u02 /dev/sdd 300G 1.1G 299G 1% /u03 /dev/sde 1.0G 274M 751M 27% /u04/quorum /dev/sdf 1.0G 262M 763M 26% /u05 [root at s602749nj3el19 bin]# cd /u04/quorum/ [root at s602749nj3el19 quorum]# ls -ltr...
2008 Apr 01
2
strange error in df -h
Hi All, I just saw this in output from df -h: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 131G 4.6G 120G 4% / /dev/sdc1 271G 141G 117G 55% /home /dev/sdd1 271G 3.9G 253G 2% /home/admin /dev/sda1 99M 20M 74M 22% /boot tmpfs 442M 0 442M 0% /dev/shm /dev/hda 11M 11M 0 100% /media/TestCD df: `status': No such file or directory df: `status': No...
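df only walks the mount table, so an entry it can no longer stat (a stale NFS, autofs or FUSE mount that happens to be named status) produces exactly this kind of error. A hedged way to locate and clear the offending entry; the mount point shown is hypothetical:

  # find the mount df is stumbling over
  grep status /proc/mounts /etc/mtab

  # a stale network mount can usually be detached lazily
  umount -l /home/admin/status   # hypothetical path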
2013 May 21
0
ALERT! /dev/xvda2 does not exist. Dropping to a shell!
...t ext4 4,6G 2,4G 2,3G 52% / tmpfs tmpfs 5,0M 0 5,0M 0% /run/lock tmpfs tmpfs 148M 0 148M 0% /run/shm /dev/mapper/disco--xen--server-xen--server--disco ext4 223G 83G 141G 38% /disco root@xen-servidor:~# test.cfg # # Kernel + memory size # kernel = '/boot/vmlinuz-3.2.0-4-amd64' ramdisk = '/boot/initrd.img-3.2.0-4-amd64' vcpus = '1' memory = '128' # # Disk device(s). # r...
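The guest drops to the initramfs shell because nothing in the config presents a block device the kernel can see as /dev/xvda2. The disk section of test.cfg is cut off above, so the following is only a hypothetical example (volume names invented) of a disk stanza whose device names agree with the root= line:

  # illustrative only: append a disk stanza whose device names match root=
  printf '%s\n' \
    "disk = [ 'phy:/dev/vg0/guest-root,xvda2,w', 'phy:/dev/vg0/guest-swap,xvda1,w' ]" \
    "root = '/dev/xvda2 ro'" >> test.cfg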
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What, other than zfs send/receive, can be done to free the fragmented space? One ZFS was used for some months to store large disk images (each about 50GByte) which were copied there with rsync. This ZFS now reports 6.39TByte usage with zfs list but only 2TByte usage with du. The other ZFS was used for similar
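Before suspecting fragmentation it is worth ruling out the usual accounting suspects: snapshots pinning blocks that rsync has since rewritten, or compression settings making the two tools count differently. A minimal sketch, assuming the dataset is tank/images (the thread does not give its name):

  # does the gap live in snapshots, in the live dataset, or in children?
  zfs list -r -t all -o name,used,referenced tank/images

  # compression or copies settings also make du and zfs list diverge
  zfs get compressratio,copies,recordsize tank/images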
2009 Jul 18
4
grep: /proc/xen/capabilities: No such file or directory
I just set up a new laptop with - Xen-unstable (http://xenbits.xensource.com/xen-unstable.git), installed via "make xen", "make install-xen", "make tools", "make install-tools" - dom0 kernel 2.6.30-rc6-tip (from git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git, changed to bleeding edge via "git checkout origin/xen-tip/next -b xen-tip/next
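That file only exists when the kernel is running as a Xen dom0 and xenfs is mounted, so the grep failure usually means either the box booted the bare-metal kernel or the pvops dom0 kernel has not mounted xenfs yet. A hedged set of checks for a pvops setup like the one described:

  # is the kernel running under the hypervisor at all?
  cat /sys/hypervisor/type            # prints "xen" when booted via the Xen entry

  # pvops kernels need xenfs mounted before /proc/xen is populated
  mount -t xenfs xenfs /proc/xen
  grep -q control_d /proc/xen/capabilities && echo "running as dom0"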
2009 Aug 25
41
snv_110 -> snv_121 produces checksum errors on Raid-Z pool
I have a 5 x 500GB disk Raid-Z pool that has been producing checksum errors right after upgrading SXCE to build 121. They seem to be occurring randomly on all 5 disks, so it doesn't look like a disk failure situation. Repeatedly running a scrub on the pool randomly repairs between 20 and a few hundred checksum errors. Since I hadn't physically touched the machine, it seems a
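Whatever turns out to be the root cause (driver, memory, or the new build), the standard way to see which files the errors have actually touched and whether anything was logged below ZFS is roughly the following, with tank standing in for the real pool name:

  # per-device error counters plus a list of files with unrecoverable errors
  zpool status -v tank

  # Solaris fault-management log of the underlying checksum/IO ereports
  fmdump -eV | less

  # clear the counters, scrub again, and see whether the errors come back
  zpool clear tank
  zpool scrub tank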
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance. I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
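For comparing layouts it helps to watch where the time goes while the test file is being written, rather than looking only at the end-to-end number. A minimal sketch of that kind of measurement, with tank as a placeholder pool name:

  # sequential write well past the size of the ARC
  mkfile 512g /tank/testfile &

  # per-vdev throughput while the write runs
  zpool iostat -v tank 5

  # per-disk service times; one slow disk drags down its whole raidz group
  iostat -xn 5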
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and iostat -Xn show lots of idle disk time, no above-average service times, and no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
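With that many spindles sitting mostly idle, the usual suspects are a single marginal disk answering slowly or the scrub being repeatedly restarted. A hedged checklist, again with tank as a placeholder pool name:

  # scrub progress and any devices with growing error counts
  zpool status -v tank

  # look for one outlier in service time (asvc_t) or %b while the scrub runs
  iostat -xn 5

  # older code restarted a scrub whenever a snapshot was taken, so check
  # whether the scrub keeps starting over
  zpool history tank | tail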