Displaying 5 results from an estimated 5 matches for "118g".
2010 Aug 19
0
Unable to mount legacy pool into zone
...VAIL REFER MOUNTPOINT
tol-pool         1.08T  91.5G  39.7K  /tol-pool
tol-pool/db01     121G  78.7G   121G  legacy
tol-pool/db02     112G  87.9G   112G  legacy
tol-pool/db03     124G  75.8G   124G  legacy
tol-pool/db04     110G  89.5G   110G  legacy
tol-pool/db05     118G  82.1G   118G  legacy
tol-pool/oracle  16.8G  13.2G  16.8G  legacy
tol-pool/redo01  2.34G  17.7G  2.34G  legacy
tol-pool/redo02  2.20G  17.8G  2.20G  legacy
tol-pool/redo03  1.17G  18.8G  1.17G  legacy
tol-pool/redo04  1.17G  18.8G  1.17G  legacy
bash-3.00# cat /etc/zones...
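All of the datasets above use legacy mountpoints, so they are normally handed to a zone either as an fs resource of type zfs in zonecfg or with a manual mount -F zfs. The following is only a sketch of that approach; the zone name dbzone and the /db01 mount directory are placeholders, not taken from the original post:

bash-3.00# zonecfg -z dbzone
zonecfg:dbzone> add fs
zonecfg:dbzone:fs> set dir=/db01
zonecfg:dbzone:fs> set special=tol-pool/db01
zonecfg:dbzone:fs> set type=zfs
zonecfg:dbzone:fs> end
zonecfg:dbzone> commit

A legacy dataset can also be mounted by hand with mount -F zfs tol-pool/db01 /db01, which is a quick way to confirm the dataset itself mounts cleanly before involving the zone configuration.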
2005 Feb 03
1
help troubleshooting inconsistencies in back up sizes
...l m n o p q r s t u v w x y z `seq 0 9`; do
/usr/local/bin/rsync -a -z -W --delete /mailhome/$i/ user@backup:/mailhome/$i
done
Question one would be: does this look correct?
now here is the heart of the problem:
on server1 the partition is 121G
on server2 it's 110G
on server3 it's 118G
So I assume I have multiple problems here. I don't see --progress as being
usable in my case since I have such a large number of files. How can I
debug the differences between these 3 bodies of files without actually
checking them individually? I basically want to be
in...
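One way to see what actually differs, without walking the trees by hand, is to let rsync itself report the differences with a dry run. The sketch below reuses the paths and flags from the loop above; the output file name is just an example:

# Dry run: -n transfers nothing (and --delete deletes nothing), -i itemizes
# every file that would be created, updated or removed.
/usr/local/bin/rsync -n -a -i --delete /mailhome/ user@backup:/mailhome/ > /tmp/backup-diff.txt
wc -l /tmp/backup-diff.txt
head /tmp/backup-diff.txt

Comparing du -sk /mailhome/* on each of the three servers first can also narrow the discrepancy down to particular subdirectories before diffing full file lists.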
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS Version 10?
What, other than zfs send/receive, can be done to free the fragmented space?
One ZFS was used for some months to store large disk images (each about 50 GByte) which are copied there with rsync. This ZFS now reports 6.39 TByte of usage with zfs list but only 2 TByte with du.
The other ZFS was used for similar
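A gap of this size between du and zfs list is often accounted for by snapshots, clones, reservations or child datasets rather than by fragmentation alone, and ZFS can report the breakdown itself. Sketch only: tank/images is a placeholder dataset name, and the usedby* properties are only available on releases new enough to support them:

# Where does the space charged to this dataset actually live?
zfs get -r used,referenced,usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation tank/images

# Any snapshots would keep blocks of images that rsync has since rewritten:
zfs list -t snapshot -r tank/images

# Compare against what du sees inside the mounted filesystem:
du -sh /tank/images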
2018 Jan 10
2
Issues accessing ZFS-shares on Linux
...sk
└─luks-sdf  254:5    0   2.7T  0  crypt
sdg           8:96   0   2.7T  0  disk
└─luks-sdg  254:6    0   2.7T  0  crypt
sdh           8:112  0   2.7T  0  disk
└─luks-sdh  254:7    0   2.7T  0  crypt
sdi           8:128  1 119.2G  0  disk
├─sdi1        8:129  1   512M  0  part  /boot/efi
└─sdi2        8:130  1   118G  0  part  /
root@punishedkorppu /# zpool status
pool: tank
state: ONLINE
scan: scrub repaired 0B in 18h41m with 0 errors on Mon Dec 25 17:18:44 2017
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0...
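For reference, which datasets exist under that pool, whether they are mounted, and whether anything is exported through ZFS's own sharesmb property can be listed in one command. Sketch only; shares defined purely in smb.conf will not show up here:

zfs list -o name,mountpoint,mounted,sharesmb -r tank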
2018 Jan 10
2
Issues accessing ZFS-shares on Linux
I just noticed that by running the commands /usr/sbin/smbd -D or
/usr/sbin/smbd -i without systemd's unit, all shares work perfectly, so
the problem must then be somehow related to systemd. Let the testing
continue.
I also tested what happens if I comment out everything else and just use
ExecStart=/usr/sbin/smbd -D, since that command worked on the console. That
did not help.
For the record, this is
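To pin down what differs between the console run and the systemd run, it can help to compare the two directly and read what the unit itself logged. A sketch, assuming the unit is named smbd.service (the actual unit file is not shown in this message):

# Foreground run, the case that is reported to work above:
/usr/sbin/smbd -i

# Then start it through systemd and look at the unit's own view:
systemctl start smbd.service
systemctl status smbd.service
journalctl -u smbd.service -b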