Displaying 20 results from an estimated 6000 matches similar to: "lazy zfs destroy"
2009 Dec 10
6
Confusion regarding 'zfs send'
I'm playing around with snv_128 on one of my systems, trying to see what kind of benefits enabling dedup will give me.
The standard practice for reprocessing data that's already stored to
add compression and now dedup seems to be a send / receive pipe
similar to:
zfs send -R <old fs>@snap | zfs recv -d <new fs>
However, according to the man page,
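As a point of reference, here is a minimal sketch of that reprocessing workflow (pool and dataset names are hypothetical); dedup and compression are enabled on the receiving side before the pipe so the incoming blocks are written through them:

zfs set dedup=on tank/new
zfs set compression=on tank/new
zfs snapshot -r tank/old@migrate
zfs send -R tank/old@migrate | zfs recv -d tank/new

One caveat worth noting: zfs send -R also replicates the source's properties, which can override what was set on the receiving side.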
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote:
> Brent,
>
> I had known about that bug a couple of weeks ago, but that bug was filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue.
>
> The other issue I noticed is that, as opposed to the
2007 Oct 31
1
Problem booting xen/xvm on Sun X4600 box
Hi,
I'm trying to boot NV76 and Xen/xVM on a Sun X4600 but am having trouble with the system crashing hard in the process. Booting NV76 on bare metal works fine.
It seems to be somewhere around when it would start X that the whole system suddenly resets and starts BIOS init again. There are no error messages in the syslog, but I get the message "A Hyper Transport sync flood error
2011 Apr 28
4
Finding where dedup'd files are
Is there an easy way to find out which datasets have dedup'd data in them? Even better would be to discover which files in a particular dataset are dedup'd.
I ran
# zdb -DDDD
which gave output like:
index 1055c9f21af63 refcnt 2 single DVA[0]=<0:1e274ec3000:2ac00:STD:1>
[L0 deduplicated block] sha256 uncompressed LE contiguous unique
unencrypted 1-copy size=20000L/20000P
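For reference, a hedged sketch of the zdb options relevant here ('tank' is a placeholder pool name):

zdb -D tank     # DDT summary: unique vs. duplicate blocks and the overall ratio
zdb -DD tank    # adds a histogram of DDT entries by reference count
zdb -S tank     # simulate dedup on an existing pool without enabling it

Note that zdb reports DVAs rather than filenames, so there is no direct way to map deduplicated blocks back to individual files.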
2010 Dec 09
3
ZFS Prefetch Tuning
Hi All,
Is there a way to tune the ZFS prefetch on a per-pool basis? I have a customer who is seeing slow performance on a pool that contains multiple tablespaces from an Oracle database; looking at the LUNs associated with that pool, they are constantly at 80-100% busy. Looking at the output from arcstat for the miss % on data, prefetch, and metadata, we are getting around 5-10% on data,
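On builds of this vintage, prefetch is a system-wide tunable rather than a per-pool property; a hedged sketch of the two usual ways to turn it off:

set zfs:zfs_prefetch_disable = 1              (persistent, in /etc/system, requires a reboot)
echo zfs_prefetch_disable/W0t1 | mdb -kw      (live, but not persistent across reboots)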
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
TestPool 696G 19.1G 677G 2% 1.13x ONLINE -
When I ran a
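As an aside, a hedged sketch of commands that help cross-check the reported savings ('TestPool' as above); the pool-level ratio counts only duplicate blocks, so comparing logical usage against allocation clarifies the numbers:

zpool list -o name,size,alloc,dedupratio TestPool
zfs list -o name,used,referenced TestPool
zdb -DD TestPool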
2010 Aug 18
10
Networker & Dedup @ ZFS
Hi,
We are considering using ZFS-based storage as a staging disk for Networker. We're aiming to provide enough storage to keep 3 months' worth of backups on disk before they are moved to tape.
To provide storage for 3 months of backups, we want to utilize the dedup functionality in ZFS.
I've searched around for these topics and found no success stories,
2006 Dec 08
22
ZFS Usage in Warehousing (lengthy intro)
Dear all,
we're currently planning to restructure our hardware environment for our data warehousing product/suite/solution/whatever.
We're currently running the database side on various SF V440s attached via dual FC to our SAN backend (EMC DMX3) with UFS. The storage system is (obviously in a SAN) shared between many systems. Performance is mediocre in terms
2008 Nov 05
2
plockstat: processing aborted: Abort due to systemic unresponsiveness
Hello,
I need some help with plockstat on the x86 platform (Sun X4600, AMD):
# plockstat -A -p 20034
plockstat: processing aborted: Abort due to systemic unresponsiveness
# plockstat -e 5 -s 10 -A -x bufsize=100k -x aggsize=20m -p 20034
plockstat: processing aborted: Abort due to systemic unresponsiveness
# ps -ef | grep 20034
algodev 20034 1 2 07:00:54 ? 86:17
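One approach sometimes tried when plockstat(1M) itself trips the DTrace deadman is to narrow the tracing to a single probe via the plockstat provider directly; a hedged sketch against the same PID:

dtrace -p 20034 -n 'plockstat$target:::mutex-block { @[ustack()] = count(); }'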
2011 Jan 28
8
ZFS Dedup question
I created a ZFS pool with dedup using the following settings:
zpool create data c8t1d0
zfs create data/shared
zfs set dedup=on data/shared
The thing I was wondering about is that it seems like ZFS only dedups at the file level and not the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
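For what it's worth, ZFS dedup is block-level, not file-level: only byte-identical, identically aligned blocks dedup, which is why merely similar files show no savings. A minimal sketch of a test (pool and paths are placeholders):

cp /data/shared/big.iso /data/shared/big.iso.copy
zpool list -o name,dedupratio data

An exact copy makes the ratio rise; inserting even one byte near the start of a copy shifts every subsequent block boundary, so almost no blocks match and the ratio stays near 1.00x.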
2008 May 30
1
Doubt about NIC on domU
Hello Friends
I'd like to ask how many interfaces I can configure on my domU.
I have one Sun Fire X4600 with 32 GB RAM and 16 AMD Opteron CPUs, running RedHat AS 4up5, and I created two guest machines with 4 CPUs each, 2 NICs, and 2 GB RAM. I needed to set 3 IP addresses, so I set one IP on eth0, another on eth0:1 (an alias), and another on eth1. On first
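A hedged sketch of the usual alternative to an eth0:1 alias: declaring a third interface in the domU configuration file so the guest sees a real eth2 (bridge names are placeholders):

vif = [ 'bridge=xenbr0', 'bridge=xenbr0', 'bridge=xenbr1' ]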
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello,
I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware.
Migrating to a new machine I understand is a simple matter of ZFS send/receive, but reformatting the existing drives to host my existing data is an area I'd like to learn a little more about. In the past I've asked about
2012 Mar 05
10
Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS
Greetings,
Quick question:
I am about to acquire some disks for use with ZFS (currently using zfs-fuse v0.7.0). I'm aware of some 4k alignment issues with Western Digital Advanced Format disks.
As far as I can tell, the Hitachi Deskstar 7K3000 (HDS723030ALA640) uses 512B sectors and so I presume does not suffer from such issues (because it doesn't lie about the physical layout
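A hedged sketch of how to double-check what such a drive reports on Linux (the device name is a placeholder):

cat /sys/block/sda/queue/physical_block_size
hdparm -I /dev/sda | grep -i 'sector size'

A true 512B-sector drive reports 512 in both places.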
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists- workaround found
Hi,
I was suffering for weeks from the following problem:
a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still locked 2.8 TB on the pool.
'zfs destroy -r pool/dataset'
hung the machine within seconds
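Not necessarily the workaround the poster found, but one commonly suggested ordering is to destroy the snapshot on its own first and then the dataset; on builds without async destroy, freeing that much space can still take a very long time:

zfs destroy pool/dataset@monthly-snap
zfs destroy pool/dataset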
2007 Mar 25
2
Installing R on a machine with 64-bit Opteron processors
I have been tasked with installing statistical and other data analysis applications on a new Sun Fire X4600 M2 x64 server that came equipped with eight dual-core AMD Opteron 64-bit processors. It is running the 64-bit version of SUSE Linux 9.
I have read through the installation docs, and I guess I don't understand what to do, or even how to identify which version, if any, of this
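A hedged sketch of the generic source build, which yields a 64-bit R when the toolchain defaults to 64-bit (the version number is merely illustrative for that timeframe):

tar xzf R-2.4.1.tar.gz
cd R-2.4.1
./configure --prefix=/usr/local
make && make check
make install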
2007 Jul 06
2
HVM Linux just installed but does not do its first boot
Just successfully installed SLES 9.3 32-bit as a file-backed HVM domU.
Everything went OK: the installer partitioned its "hard disk" into root and swap, and grub was installed into the sda boot sector, but I could not get it to boot even once after the installation. Every domU boot shows the following screen for a short time and then dies:
Booting from Hard Disk...
Boot from
2008 May 27
3
dom0 memory limits greater than 2Gb?
The main Xen mailing list ("xen-users") generally advises users to limit the memory on dom0 to 2 GB or less; apparently the generic version of Xen has trouble beyond that.
What's the corresponding advice for the Xen in OpenSolaris, Nevada b87 in particular? I've got an X4600 with 32 GB of physical memory on it. I was originally planning to have dom0 be the general-
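For reference, a hedged sketch of how such a cap is applied on OpenSolaris xVM, on the hypervisor line in /boot/grub/menu.lst:

kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M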
2010 Sep 25
4
dedup testing?
Hi all
Has anyone done any testing with dedup on OI? On OpenSolaris there is a nifty "feature" that allows the system to hang for hours or days if attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering...
I'll get a 10TB test box released for testing OI in a few weeks, but before
2010 Sep 30
3
Cannot destroy snapshots: dataset does not exist
Hello,
I have a ZFS filesystem (zpool version 26 on Nexenta CP 3.01) which I'd like to roll back, but it's having an existential crisis.
Here's what I see:
root@bambi:/# zfs rollback bambi/faline/userdocs@AutoD-2010-09-28
cannot rollback to 'bambi/faline/userdocs@AutoD-2010-09-28': more recent snapshots exist
use '-r' to
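What the error message points at, as a hedged sketch: list the snapshots standing in the way, then let -r destroy the more recent ones as part of the rollback:

zfs list -t snapshot -r bambi/faline/userdocs
zfs rollback -r bambi/faline/userdocs@AutoD-2010-09-28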
2009 Nov 24
9
Best practices for zpools on zfs
Suppose I have a storage server that runs ZFS, presumably providing
file (NFS) and/or block (iSCSI, FC) services to other machines that
are running Solaris. Some of the use will be for LDoms and zones[1],
which would create zpools on top of zfs (fs or zvol). I have concerns
about variable block sizes and the implications for performance.
1.
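A hedged sketch of the knob that matters most in that stacking: volblocksize is fixed at zvol creation time and cannot be changed afterwards, so it should be chosen to match the inner pool's workload (names and sizes are placeholders):

zfs create -V 20G -o volblocksize=8k tank/ldom1-disk
zpool create guestpool /dev/zvol/dsk/tank/ldom1-disk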