Displaying 3 results from an estimated 3 matches for "228g".
2011 Aug 11
6
unable to mount zfs file system..pl help
...2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool1       120K   228G    21K  /pool1
pool1/fs1    21K   228G    21K  /vik
[root@nofclo038]/# zfs get all pool1/fs1
NAME       PROPERTY  VALUE                  SOURCE
pool1/fs1  type      filesystem             -
pool1/fs1  creation  Fri Aug 12  1:44 2011  -
pool1/fs1  used...
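A reasonable first step for a mount failure like this (not shown in the excerpt, and assuming the pool itself imported cleanly) is to check the dataset's mounted/mountpoint properties and try an explicit mount:
# zfs get mounted,mountpoint pool1/fs1   # should report mounted=yes and mountpoint=/vik
# zfs mount pool1/fs1                    # mount just this dataset
# zfs mount -a                           # or mount every dataset in the pool
# dmesg | tail                           # look for zfs module or VFS errors if the mount still fails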
2010 Nov 04
1
orphan inodes deleted issue
...array:
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              9.5G  735M  8.3G   8% /
/dev/md7               38G  6.7G   30G  19% /var
/dev/md6               15G  4.5G  9.1G  33% /usr
/dev/md5              103G   45G   54G  46% /backup
/dev/md3              284G   42G  228G  16% /home
/dev/md2              2.0G  214M  1.7G  12% /tmp
/dev/md0              243M   24M  207M  11% /boot
I've been searching on Google but I can't find an explanation for this
problem. Is it a bug? :D
Thank you very much :D
--
Best regards,
David
http://blog.pnyet.web.id
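For context (my reading, not stated in the post): "deleting orphan inode" messages normally come from ext3/ext4 journal recovery after an unclean shutdown, not from a kernel bug. A quick way to inspect the affected array, using /dev/md3 only as an example device, is:
$ dmesg | grep -i orphan                 # identify which filesystem reported the orphan inodes
$ sudo tune2fs -l /dev/md3 | grep -iE 'state|mount count|last checked'
$ sudo fsck -n /dev/md3                  # read-only check; run a full fsck only with the filesystem unmounted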
2008 Jul 06
14
confusion and frustration with zpool
I have a zpool which has grown "organically": I had a 60GB disk, added a 120, added a 500, then got a 750, sliced it, and mirrored the other pieces.
The 60 and the 120 are internal PATA drives; the 500 and 750 are Maxtor OneTouch USB drives.
The original system I created the 60+120+500 pool on was Solaris 10 update 3, patched to use ZFS sometime last fall (November I believe). In
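As a rough sketch (pool and device names below are made up, not taken from the post), mirroring slices of the 750 onto the existing single-disk vdevs would look roughly like this on Solaris:
$ zpool status tank                      # show how the 60/120/500 vdevs are currently arranged
$ zpool attach tank c1t1d0 c2t0d0s3      # attach a slice of the 750 as a mirror of an existing device
$ zpool iostat -v tank                   # confirm the new mirror pairs and per-vdev usage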