Displaying 15 results from an estimated 15 matches for "usedbydataset".
2009 Oct 30
0
Does refreservation sit on top of usedbydataset?
...fault
zsan0/fs/bar-001  refreservation        400G   local
zsan0/fs/bar-001  primarycache          all    default
zsan0/fs/bar-001  secondarycache        all    default
zsan0/fs/bar-001  usedbysnapshots       0      -
zsan0/fs/bar-001  usedbydataset         280G   -
zsan0/fs/bar-001  usedbychildren        0      -
zsan0/fs/bar-001  usedbyrefreservation  400G   -
zfs list -t snapshot | grep bar
zsan0/fs/bar-001@zfs-auto-snap:zsan03_1day_keep60days-2009-10-30-16:08  0  -  280G  -
My concer...
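The question in this first thread turns on how ZFS decomposes a dataset's `used` property into four non-overlapping buckets. A minimal sketch in plain shell arithmetic, plugging in the figures quoted above (note that `usedbyrefreservation` is reported as the full 400G even though 280G of data exists, because the snapshot pins the existing blocks and none of them can satisfy the reservation):

```shell
# ZFS space accounting identity (values below in GiB, from the listing above):
#   used = usedbysnapshots + usedbydataset + usedbychildren + usedbyrefreservation
usedbysnapshots=0
usedbydataset=280
usedbychildren=0
usedbyrefreservation=400
used=$((usedbysnapshots + usedbydataset + usedbychildren + usedbyrefreservation))
# The refreservation is charged on top of the data already written.
echo "used = ${used}G"
```

So for this dataset the reservation does sit on top of `usedbydataset`: the 400G is held in full because the snapshot prevents the reservation from being satisfied by overwriting existing blocks.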
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
...tepool/users refreservation none default
remotepool/users primarycache all default
remotepool/users secondarycache all default
remotepool/users usedbysnapshots 0 -
remotepool/users usedbydataset 9.06G -
remotepool/users usedbychildren 0 -
remotepool/users usedbyrefreservation 0 -
remotepool/users logbias latency default
remotepool/users dedup off...
2011 Aug 11
6
unable to mount zfs file system..pl help
...none default
pool1/fs1 refreservation none default
pool1/fs1 primarycache all default
pool1/fs1 secondarycache all default
pool1/fs1 usedbysnapshots 0 -
pool1/fs1 usedbydataset 21K -
pool1/fs1 usedbychildren 0 -
pool1/fs1 usedbyrefreservation 0 -
pool1/fs1 logbias latency default
pool1/fs1 dedup off default
pool1/fs1 mlslabel...
2010 Oct 01
1
File permissions getting destroyed with M$ software on ZFS
...ult
fsdata/admin/ENS refreservation none default
fsdata/admin/ENS primarycache all default
fsdata/admin/ENS secondarycache all default
fsdata/admin/ENS usedbysnapshots 0 -
fsdata/admin/ENS usedbydataset 73.6G -
fsdata/admin/ENS usedbychildren 0 -
fsdata/admin/ENS usedbyrefreservation 0 -
Has there been any other development on this issue?
--
C. J. Keist Email: cj.keist at colostate.edu
Systems Grou...
2013 Mar 06
0
where is the free space?
...default
tank/lxc/tipper/brick1 primarycache all default
tank/lxc/tipper/brick1 secondarycache all default
tank/lxc/tipper/brick1 usedbysnapshots 0 -
tank/lxc/tipper/brick1 usedbydataset 16.4G -
tank/lxc/tipper/brick1 usedbychildren 0 -
tank/lxc/tipper/brick1 usedbyrefreservation 0 -
tank/lxc/tipper/brick1 logbias latency def...
2010 Jun 16
0
files lost in the zpool - retrieval possible ?
...tion none default
rpool primarycache all default
rpool secondarycache all default
rpool usedbysnapshots 0 -
rpool usedbydataset 81K -
rpool usedbychildren 11,5G -
rpool usedbyrefreservation 0 -
rpool org.opensolaris.caiman:install ready local
# do a scrub:
jack at...
2010 Jun 08
1
ZFS Index corruption and Connection reset by peer
...default
zhome/username refreservation none default
zhome/username primarycache all default
zhome/username secondarycache all default
zhome/username usedbysnapshots 0 -
zhome/username usedbydataset 750M -
zhome/username usedbychildren 0 -
zhome/username usedbyrefreservation 0 -
Another problem, which I have been unable to solve so far, is that a lot of
entries show up in my logs about:
dovecot: imap-login: net_disc...
2009 Aug 21
9
Not sure how to do this in zfs
Hello all,
I've tried changing all kinds of attributes for the zfs's, but I can't
seem to find the right configuration.
So I'm trying to move some zfs's under another; it looks like this:
/pool/joe_user move to /pool/homes/joe_user
I know I can do this with zfs rename, and everything is fine. The problem
I'm having is, when I mount
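The move described above is typically two steps: the rename itself, then clearing any locally set mountpoint so the dataset picks up a path under its new parent. A sketch using the dataset names from the post, shown as a dry run (the leading `echo` would be removed on a live pool; whether a local mountpoint is actually set is an assumption, since the post is truncated here):

```shell
# Dry run of the rename described in the post; remove "echo" to execute.
echo zfs rename pool/joe_user pool/homes/joe_user
# If the old dataset carried a locally set mountpoint, let it inherit the
# new parent's path instead of keeping the stale /pool/joe_user value.
echo zfs inherit mountpoint pool/homes/joe_user
```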
2009 Oct 15
8
sub-optimal ZFS performance
Hello,
ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome.
I am running OSOL on my laptop, currently b124, and I found that the
performance of ZFS is not optimal in all situations. If I check
how much space the package cache for pkg(1) uses, it takes a bit
longer on this host than on a comparable machine to which I transferred
all the data.
user@host:/var/pkg$ time
2010 Aug 03
1
snapshot space - miscalculation?
...959G -
daten/backups usedbysnapshots 523G -
daten/backups usedbydataset 437G -
daten/backups usedbychildren 0 -
daten/backups...
2012 Feb 18
6
Cannot mount encrypted filesystems.
Looking for help regaining access to
encrypted ZFS file systems that
stopped accepting the encryption key.
I have a file server with a setup
as follows:
Solaris 11 Express 2010.11/snv_151a
8 x 2-TB disks, each one divided
into three equal size partitions,
three raidz3 pools built from a
"slice" across matching partitions:
Disk 1 Disk 8 zpools
+--+ +--+
|p1| .. |p1| <-
2009 Nov 11
0
libzfs zfs_create() fails on sun4u daily bits (daily.1110)
...default
rpool/export refreservation none default
rpool/export primarycache all default
rpool/export secondarycache all default
rpool/export usedbysnapshots 0 -
rpool/export usedbydataset 688M -
rpool/export usedbychildren 21K -
rpool/export usedbyrefreservation 0 -
rpool/export logbias latency default
rpool/export dedup off default
rpool/expo...
2010 Oct 11
0
Ubuntu iSCSI install to COMSTAR zfs volume Howto
...default
tank/export/iscsi/acer-ubuntu primarycache all default
tank/export/iscsi/acer-ubuntu secondarycache all default
tank/export/iscsi/acer-ubuntu usedbysnapshots 0 -
tank/export/iscsi/acer-ubuntu usedbydataset 54.5K -
tank/export/iscsi/acer-ubuntu usedbychildren 0 -
tank/export/iscsi/acer-ubuntu usedbyrefreservation 0 -
tank/export/iscsi/acer-ubuntu logbias latency default
tank/export/iscsi/acer...
2009 Sep 02
6
SXCE 121 Kernel Panic while installing NetBSD 5.0.1 PVM DomU
Hi all!
I am running SXCE 121 on a dual quad-core X2200M2 (64 bit of course).
During an installation of a NetBSD 5.0.1 PVM domU, the entire machine
crashed with a kernel panic. Here's what I managed to salvage from
the LOM console of the machine:
Sep 2 18:55:19 glaurung genunix: /xpvd/xdb@41,51712 (xdb5) offline
Sep 2 18:55:19 glaurung genunix: /xpvd/xdb@41,51728 (xdb6) offline
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file which shows the output on my machine of
zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt
The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots it adds up to about 4.8GB,
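A likely explanation for this last question: a snapshot's own USED column counts only the blocks unique to that snapshot (what destroying just that snapshot would free), while USEDSNAP counts every block held by any snapshot but no longer by the live filesystem. Blocks shared between two or more snapshots therefore show up in USEDSNAP but in no individual snapshot's USED, so the per-snapshot sum can be far below USEDSNAP. A toy illustration with hypothetical figures (chosen to mirror the shape of the post, not the poster's real data):

```shell
# Hypothetical figures (GiB). Per-snapshot USED counts only blocks unique
# to each snapshot; blocks pinned by 2+ snapshots appear in neither.
sum_unique=5          # total of the individual snapshots' USED columns
shared_by_snapshots=9 # blocks referenced by 2+ snapshots, freed from the live FS
usedsnap=$((sum_unique + shared_by_snapshots))
echo "sum of per-snapshot USED = ${sum_unique}G"
echo "USEDSNAP = ${usedsnap}G"
```

With numbers like these, a 4.8GB per-snapshot total alongside a 13.8GB USEDSNAP is consistent: the difference is space shared among several snapshots.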