search for: 278g

Displaying 5 results from an estimated 5 matches for "278g".

2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10? What, apart from zfs send/receive, can be done to free the fragmented space? One ZFS was used for some months to store large disk images (each 50 GByte in size), which are copied there with rsync. This ZFS then reports 6.39 TByte of usage with zfs list and only 2 TByte with du. The other ZFS was used for similar
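
A gap between du and zfs list usually comes from space the dataset still references but that is no longer visible in the file tree (snapshots, clones, or rsync rewriting whole image files). A generic way to check, not taken from this thread (the dataset name tank/images is made up for illustration):

    # Space held by snapshots shows up in zfs list but not in du
    zfs list -r -t snapshot tank/images
    # Compare the dataset's own accounting with what du sees on disk
    zfs get used,referenced,compressratio tank/images
    du -sh /tank/images
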
2006 Jun 12
3
ZFS + Raid-Z pool size incorrect?
...USED  AVAIL  REFER  MOUNTPOINT
sata     145K   825G   49K    /sata
bash-3.00# zpool destroy -f sata
bash-3.00# zpool create sata mirror c2t0d0 c2t1d0
bash-3.00# zpool list
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
sata   278G   52.5K  278G   0%   ONLINE  -
bash-3.00# zpool status
  pool: sata
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        sata        ONLINE     0     0     0
          mirror    ONLINE     0     0     0
            c2t0d0  ONLINE     0...
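
The numbers in this excerpt match how zpool list reports capacity: a two-way mirror shows the size of a single disk (278G here), while a raidz vdev is reported with its raw capacity and the parity overhead only becomes visible in zfs list. A rough illustration, with output omitted; c2t0d0/c2t1d0 are from the snippet, c2t2d0/c2t3d0 are assumed extra disks:

    # Mirror: zpool list SIZE is roughly one disk's capacity
    zpool create sata mirror c2t0d0 c2t1d0
    zpool list sata
    # Raidz: zpool list shows raw space across all members;
    # zfs list shows the usable space after parity
    zpool destroy sata
    zpool create sata raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
    zpool list sata
    zfs list sata
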
2010 Jul 09
2
snapshot out of space
...wing error message when trying to do a zfs snapshot:
root@pluto# zfs snapshot datapool/mars@backup1
cannot create snapshot 'datapool/mars@backup1': out of space
root@pluto# zpool list
NAME       SIZE  USED   AVAIL  CAP  HEALTH  ALTROOT
datapool   556G  110G   446G   19%  ONLINE  -
rpool      278G  12.5G  265G    4%  ONLINE  -
Any ideas???
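
An "out of space" error on snapshot creation while the pool itself has free space usually points at a quota or reservation on the dataset rather than the pool. Generic checks, not from this thread:

    # A quota or refreservation on the dataset can block snapshot creation
    zfs get quota,refquota,reservation,refreservation datapool/mars
    # Existing snapshots also count against the dataset's quota
    zfs list -r -t snapshot datapool
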
2007 Apr 14
3
zfs snaps and removing some files
...c1t2d0  ONLINE  0  0  0
   c1t3d0  ONLINE  0  0  0
errors: No known data errors

It does give me a total of:
[11:32:55] root@chrysek: /d/d2 > zpool list
NAME     SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
mypool   278G  271G  6.75G  97%  ONLINE  -
I am using around 150 GB of that 278 GB that I have, and the disk is 99% full:
[11:33:58] root@chrysek: /d/d2 > df -k .
Filesystem   1k-blocks       Used  Available  Use%  Mounted on
mypool/d     152348400  149829144    2519256   99%  /d/d2
I am ta...
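
When a dataset has snapshots, deleting files only frees the blocks that no snapshot still references, which is why df can stay at 99% here. A generic way to see which snapshots pin the space (the pool name mypool is from the snippet; the snapshot name is hypothetical):

    # Show each snapshot and how much space it holds exclusively
    zfs list -r -t snapshot -o name,used,referenced mypool
    # Space comes back only when the snapshots referencing the deleted files are destroyed
    zfs destroy mypool/d@some-old-snapshot
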
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance. I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
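
The test described is a plain sequential write. Something along these lines reproduces it (the pool name and file path are placeholders, not from the post):

    # Write a 512 GB file and watch aggregate pool write bandwidth while it runs
    mkfile 512g /thumperpool/testfile &
    zpool iostat thumperpool 5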