Displaying 6 results from an estimated 6 matches for "110g".
2010 Aug 19
0
Unable to mount legacy pool into zone
...bash-3.00# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
tol-pool         1.08T  91.5G  39.7K  /tol-pool
tol-pool/db01     121G  78.7G   121G  legacy
tol-pool/db02     112G  87.9G   112G  legacy
tol-pool/db03     124G  75.8G   124G  legacy
tol-pool/db04     110G  89.5G   110G  legacy
tol-pool/db05     118G  82.1G   118G  legacy
tol-pool/oracle  16.8G  13.2G  16.8G  legacy
tol-pool/redo01  2.34G  17.7G  2.34G  legacy
tol-pool/redo02  2.20G  17.8G  2.20G  legacy
tol-pool/redo03  1.17G  18.8G  1.17G  legacy
tol-pool/redo04  1.17G...
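All of the db datasets in that listing have a legacy mountpoint, which means `zfs mount -a` ignores them; on Solaris they are mounted with the legacy mount command, or handed to a zone through a zonecfg `fs` resource. A minimal sketch — the zone name and mount directories here are assumptions, only the dataset name comes from the listing:

```shell
# Mount a legacy-mountpoint dataset by hand (Solaris syntax):
mount -F zfs tol-pool/db01 /mnt/db01

# Or make it appear inside a zone via zonecfg's fs resource
# (the dataset must keep mountpoint=legacy for this to work):
zonecfg -z dbzone <<'EOF'
add fs
set dir=/db01
set special=tol-pool/db01
set type=zfs
end
EOF
```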
2005 Feb 03
1
help troubleshooting inconsistencies in backup sizes
...in a b c d e f g h i j k l m n o p q r s t u v w x y z `seq 0 9`; do
/usr/local/bin/rsync -a -z -W --delete /mailhome/$i/ user@backup:/mailhome/$i
done
Question one would be: does this look correct?
now here is the heart of the problem:
on server1 the partition is 121G
on server2 it's 110G
on server3 it's 118G
so I assume I have multiple problems here. I don't see --progress as being
usable in my case since I have such a large number of files. How can I
debug the differences between these 3 bodies of files without
actually checking them individually?...
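One way to answer that without touching files individually is to let rsync itself report the differences with a dry run, and to compare per-directory sizes first. A sketch under the post's assumptions (same `/mailhome/$i` layout, same `user@backup` host):

```shell
# 1) Coarse comparison: per-top-level-directory sizes on each side.
du -sk /mailhome/* | sort -k2 > local-sizes.txt

# 2) Exact comparison: -n (dry run) plus -i (itemize) lists every file
#    rsync would transfer or delete, without moving any data.
for i in a b c d e f g h i j k l m n o p q r s t u v w x y z `seq 0 9`; do
    /usr/local/bin/rsync -a -n -i --delete /mailhome/$i/ user@backup:/mailhome/$i
done > rsync-diff.txt
```

Note also that identical file trees can legitimately report different partition usage across machines, since `df` figures include filesystem overhead that varies with block size and inode count.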
2009 Nov 11
0
libzfs zfs_create() fails on sun4u daily bits (daily.1110)
...40.2G 93.7G 66K /rpool
rpool/ROOT          8.43G  93.7G    21K  legacy
rpool/ROOT/snv_126  8.43G  93.7G  8.43G  /
rpool/dump          15.9G  93.7G  15.9G  -
rpool/export          44K  93.7G    23K  /export
rpool/export/home     21K  93.7G    21K  /export/home
rpool/swap          15.9G   110G  4.59M  -
root t5120-sfb-01 [23:04:46 0]# zfs create rpool/export/test
root t5120-sfb-01 [23:04:56 0]# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               40.2G  93.7G    66K  /rpool
rpool/ROOT          8.43G  93.7G    21K  legacy
rpool/ROOT/snv_126  8.43G  93.7G  8.4...
2010 Jul 09
2
snapshot out of space
I am getting the following error message when trying to do a zfs snapshot:
root@pluto# zfs snapshot datapool/mars@backup1
cannot create snapshot 'datapool/mars@backup1': out of space
root@pluto# zpool list
NAME       SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
datapool   556G   110G   446G  19%  ONLINE  -
rpool      278G  12.5G   265G   4%  ONLINE  -
Any ideas???
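A pool with free space can still refuse snapshots when the dataset itself is constrained: snapshot space is charged against the dataset's quota and reservations, not just pool capacity. A sketch of what to check first (the dataset name is from the post):

```shell
# Snapshots count against the dataset's quota, so a full quota can
# produce "out of space" even with 446G free at the pool level.
zfs get quota,refquota,reservation,refreservation datapool/mars
zfs list -o name,used,avail,refer datapool/mars
```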
2005 Jun 19
1
ext3 offline resizing
Hi all,
I want to setup a linux workstation with FC4 and with
all the partitions (except for /boot) under LVM to be
able to resize them in future. I don't need online
resizing, I can shutdown the system and reboot with
the rescuecd when needed.
I have done some tests on this configuration and I have
several doubts:
If I format a partition with the resize_inode feature
enabled and I resize it
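For reference, the usual offline ext3 resize under LVM runs fsck first and orders the steps differently for growing and shrinking; the resize_inode feature mentioned above pre-reserves group-descriptor space so the filesystem can be grown later. A sketch — the volume-group name and sizes below are assumptions:

```shell
# Growing: enlarge the logical volume first, then the filesystem.
e2fsck -f /dev/vg0/home          # resize2fs insists on a clean fsck
lvextend -L +10G /dev/vg0/home
resize2fs /dev/vg0/home          # grows the fs to fill the LV

# Shrinking: the order reverses — shrink the filesystem first,
# and never reduce the LV below the filesystem's new size.
e2fsck -f /dev/vg0/home
resize2fs /dev/vg0/home 20G
lvreduce -L 20G /dev/vg0/home
```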
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
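A scaled-down, runnable version of that kind of sequential-write test (mkfile is Solaris-specific, so dd stands in here; the 64 MB size is only for illustration — a real benchmark needs a file well beyond RAM to defeat caching):

```shell
# Write zeros sequentially; conv=fsync flushes to disk before dd
# exits, so the throughput figure dd prints includes the flush.
dd if=/dev/zero of=/tmp/zfs-writetest bs=1M count=64 conv=fsync
rm -f /tmp/zfs-writetest
```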