search for: 406m

Displaying 5 results from an estimated 5 matches for "406m".

2007 Oct 26
1
data error in dataset 0. what's that?
...destroyed zpool B. I managed to get zpool A back and all of my data appears to be there, but I have a data error which I am unable to track. Apparently it is: DATASET 0, OBJECT a, RANGE lvl=0 blkid=0 I have found this line from output of zdb: Dataset mos [META], ID 0, cr_txg 4, last_txg 1759562, 406M, 362 objects Is anyone able to shed any light on where this error might be and what I might be able to do about it? I do not have a backup of this data so restoring is not an option. Any advice appreciated. Thanks, Matt This message posted from opensolaris.org
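Dataset 0 is the pool's meta-objset (MOS), so the reported error sits in pool metadata rather than in a user filesystem. A minimal sketch of how one might inspect it, assuming the pool is named poolA (treating a bare pool name as addressing the MOS in zdb is an assumption):

    # list the files/objects the pool currently considers damaged
    zpool status -v poolA

    # dump MOS object 10 (0xa, the OBJECT from the error); -dddd raises verbosity
    zdb -dddd poolA 10

    # re-verify the pool and, if the error does not recur, clear the error log
    zpool scrub poolA
    zpool clear poolA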
2005 Sep 24
0
[Bug 3116] New: large tar files: 1 gig size: retransmitted: rsync_rsh
...Sep 23 01:01 host1-zip-start.txt
-rwx------ 1 backup backup 836K Sep 23 01:01 host2-backup-etc.zip
-rwx------ 1 backup backup 513M Sep 23 01:04 host2-backup-home-a2e.zip
-rwx------ 1 backup backup 393M Sep 23 01:06 host2-backup-home-f2j.zip
-rwx------ 1 backup backup 406M Sep 23 01:08 host2-backup-home-k2o.zip
-rwx------ 1 backup backup 375M Sep 23 01:10 host2-backup-home-p2t.zip
-rwx------ 1 backup backup 130M Sep 23 01:10 host2-backup-home-u2z.zip
-rwx------ 1 backup backup 11M Sep 23 01:01 host2-backup-mysql.zip
-rwx------ 1 backup back...
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance. I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as RAID 0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
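A minimal sketch of that kind of streaming-write test, assuming a test pool named tank with hypothetical device names (the large mkfile size comes from the post):

    # create a test layout; disk names are placeholders
    zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0

    # time one large sequential write while sampling pool throughput
    ptime mkfile 512g /tank/bigfile &
    zpool iostat tank 5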
2012 Jun 15
4
Resizing ext4 filesystem while mounted
Greetings - I had a logical volume that was running out of space on a virtual machine. I successfully expanded the LV using lvextend, and lvdisplay shows that it has been expanded. Then I went to expand the filesystem to fill the new space (# resize2fs -p /dev/vde1) and I get the result that the filesystem is already xx blocks long; nothing to do. If I do a # df -h, I can see that the
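One common cause of resize2fs reporting "nothing to do" is pointing it at a device node that did not actually grow. The usual online-growth sequence targets the LV device itself; a minimal sketch, assuming a hypothetical volume group vg0 and logical volume data:

    # grow the logical volume (the +10G increment is illustrative)
    lvextend -L +10G /dev/vg0/data

    # confirm the new LV size
    lvdisplay /dev/vg0/data

    # grow ext4 in place; works while the filesystem is mounted
    resize2fs -p /dev/vg0/data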
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and an iostat -xn show lots of idle disk time, no above-average service times, and no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
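The diagnostics referenced in the post, sketched for a hypothetical pool named tank:

    # current scrub rate and estimated completion time
    zpool status -v tank

    # per-vdev bandwidth and IOPS, sampled every 5 seconds
    zpool iostat -v tank 5

    # per-device service times and busy percentages (Solaris/illumos)
    iostat -xn 5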