Displaying 3 results from an estimated 3 matches for "121g".
2010 Aug 19
0
Unable to mount legacy pool into zone
...0
          c2t12d0   ONLINE       0     0     0
        spares
          c2t13d0   INUSE        currently in use

errors: No known data errors
bash-3.00# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
tol-pool         1.08T  91.5G  39.7K  /tol-pool
tol-pool/db01     121G  78.7G   121G  legacy
tol-pool/db02     112G  87.9G   112G  legacy
tol-pool/db03     124G  75.8G   124G  legacy
tol-pool/db04     110G  89.5G   110G  legacy
tol-pool/db05     118G  82.1G   118G  legacy
tol-pool/oracle  16.8G  13.2G  16.8G  legacy
tol-pool/redo01  2.34G...
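The "legacy" mountpoints mean ZFS will not mount these datasets on its own; they have to be mounted explicitly or handed to the zone. A minimal sketch of the usual Solaris 10 approach (the zone name "myzone" and the mount directories are only placeholders, not taken from the thread):

    # mount a legacy dataset by hand from the global zone
    mount -F zfs tol-pool/db01 /mnt/db01

    # or let the zone mount it at boot through an fs resource
    zonecfg -z myzone
      add fs
        set dir=/db01
        set special=tol-pool/db01
        set type=zfs
      end
      commit

An alternative is "add dataset" in zonecfg, which delegates the whole dataset to the zone instead of mounting it from the global zone.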
2005 Feb 03
1
help troubleshooting inconsistencies in backup sizes
...script to back up:
for i in a b c d e f g h i j k l m n o p q r s t u v w x y z `seq 0 9`; do
/usr/local/bin/rsync -a -z -W --delete /mailhome/$i/ user@backup:/mailhome/$i
done
Question one would be: does this look correct?
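For what it's worth, here is the same loop restated with comments, purely as an annotated sketch (nothing changed except the comments):

    # loop over the hashed top-level mail directories a-z and 0-9
    for i in a b c d e f g h i j k l m n o p q r s t u v w x y z `seq 0 9`; do
        # -a        archive mode: recurse and preserve perms, times, owners, links
        # -z        compress data during the transfer
        # -W        copy whole files instead of using the delta algorithm
        # --delete  remove files on the backup that no longer exist on the source
        # trailing slash on the source syncs the *contents* of /mailhome/$i
        /usr/local/bin/rsync -a -z -W --delete /mailhome/$i/ user@backup:/mailhome/$i
    done

Syntactically it looks fine; -z together with -W is an unusual pairing (whole-file copies, but still compressed on the wire), though not wrong.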
Now here is the heart of the problem:
on server1 the partition is 121G
on server2 it's 110G
on server3 it's 118G
So I assume I have multiple problems here. I don't see --progress as being
usable in my case since I have such a large number of files. How can I
debug what the differences are between these 3 bodies of files in a way
that doesn't involve actually che...
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir <jim at palousetech.com> wrote:
> Well, after a very stressful weekend, I think I have things largely
> working. Turns out that most of the above issues were caused by the Linux
> permissions of the exports for all three volumes (they had been reset to
> 600; setting them to 774 or 770 fixed many of the issues). Of course, I
>
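For reference, if those exports are GlusterFS bricks backing oVirt storage domains, the contents are normally expected to be owned by vdsm:kvm (UID/GID 36:36) with group access; a rough sketch, using purely hypothetical brick paths:

    # hypothetical mount points; substitute the real brick/export directories
    for d in /gluster/brick1/engine /gluster/brick1/data /gluster/brick1/export; do
        chown 36:36 "$d"   # vdsm:kvm on oVirt hosts
        chmod 770  "$d"    # one of the modes mentioned above
    done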