similar to: Confusion regarding 'zfs send'

Displaying 20 results from an estimated 5000 matches similar to: "Confusion regarding 'zfs send'"

2011 Apr 28
4
Finding where dedup'd files are
Is there an easy way to find out which datasets have dedup'd data in them? Even better would be to discover which files in a particular dataset are dedup'd. I ran # zdb -DDDD which gave output like: index 1055c9f21af63 refcnt 2 single DVA[0]=<0:1e274ec3000:2ac00:STD:1> [L0 deduplicated block] sha256 uncompressed LE contiguous unique unencrypted 1-copy size=20000L/20000P
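For reference, a rough sketch of pool-level dedup checks (the pool name 'tank' is a placeholder, not from the thread):
    zpool get dedupratio tank    # overall dedup ratio for the pool
    zdb -DD tank                 # DDT statistics and a refcount histogram
    zdb -S tank                  # simulate dedup to estimate the achievable ratio
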
2010 Sep 25
4
dedup testing?
Hi all Has anyone done any testing with dedup with OI? On opensolaris there is a nifty "feature" that allows the system to hang for hours or days if attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering... I'll get a 10TB test box released for testing OI in a few weeks, but before
2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new server, but for some reason I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
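For comparison, a typical send/receive over ssh looks roughly like this (hostnames and dataset names are placeholders):
    zfs snapshot -r tank/media@migrate
    zfs send -R tank/media@migrate | ssh newserver pfexec zfs receive -Fdu tank
    # -R sends the dataset with its snapshots/descendants; -Fdu forces, keeps names, leaves them unmounted
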
2008 Feb 17
12
can't share a zfs
-bash-3.2$ zfs share tank cannot share 'tank': share(1M) failed -bash-3.2$ How do I figure out what's wrong?
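A few checks that often narrow this down on Solaris-family systems (a generic sketch, not a diagnosis of this post):
    zfs get sharenfs,sharesmb tank        # zfs share needs one of these set
    svcs -xv svc:/network/nfs/server      # is the NFS server service online?
    share -F nfs /tank                    # run share(1M) by hand to see its own error
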
2008 Jul 12
2
sharenfs=off, but still being shared?
I noticed an oddity on my 2008.05 box today. I created a new zfs file system that I was planning to NFS-share out to an old FreeBSD box; after I put sharenfs=on for it, I noticed there were a bunch of others shared too: -bash-3.2# dfshares -F nfs RESOURCE SERVER ACCESS TRANSPORT reaver:/store/movies reaver - - reaver:/export
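sharenfs is an inherited property, so setting it on a parent dataset shares the descendants too; one way to see where it is coming from (pool name 'store' is taken from the snippet):
    zfs get -r sharenfs store             # value and SOURCE column for every dataset
    zfs get -r -s local sharenfs store    # only datasets where it is set explicitly
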
2010 Mar 18
2
lazy zfs destroy
OK I have a very large zfs snapshot I want to destroy. When I do this, the system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with 128GB of memory. Now this may be more of a function of the IO device, but let's say I don't care that this zfs destroy finishes quickly. I actually don't care, as long as it finishes before I run out of disk space. So a
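Before destroying, it can help to see how much space the snapshot is actually pinning (dataset names below are placeholders):
    zfs list -t snapshot -o name,used,refer -s used
    zfs get usedbysnapshots,usedbydataset tank/bigfs
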
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: FreeBSD with a zpool v28 and a Nexenta (OpenSolaris b134) running zpool v26. Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic. Check the panic @
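When replicating between implementations, confirming what each side supports is a reasonable first step (a sketch; 'remotepool/users' is taken from the snippet):
    zpool get version remotepool          # pool version on the receiving side
    zfs get version remotepool/users      # filesystem version of the replicated dataset
    zpool upgrade -v                      # list the pool versions this build understands
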
2008 Jul 10
49
Supermicro AOC-USAS-L8i
On Wed, Jul 9, 2008 at 1:12 PM, Tim <tim at tcsac.net> wrote: > Perfect. Which means good ol' supermicro would come through :) WOHOO! > > AOC-USAS-L8i > > http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm Is this card new? I'm not finding it at the usual places like Newegg, etc. It looks like the LSI SAS3081E-R, but probably at 1/2 the
2012 Mar 05
10
Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS
Greetings, Quick question: I am about to acquire some disks for use with ZFS (currently using zfs-fuse v0.7.0). I'm aware of some 4k alignment issues with Western Digital advanced format disks. As far as I can tell, the Hitachi Deskstar 7K3000 (HDS723030ALA640) uses 512B sectors and so I presume does not suffer from such issues (because it doesn't lie about the physical layout
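On Linux (the poster is on zfs-fuse), the sector sizes a drive reports can be checked via sysfs (assuming the disk appears as sda):
    cat /sys/block/sda/queue/logical_block_size
    cat /sys/block/sda/queue/physical_block_size
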
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello, I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware. Migrating to a new machine I understand is a simple matter of ZFS send/receive, but reformatting the existing drives to host my existing data is an area I'd like to learn a little more about. In the past I've asked about
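As a sketch of the send/receive route mentioned above (names are placeholders; note a stream stored as a flat file has no redundancy of its own):
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate > /backup/tank-migrate.zfs    # or pipe straight into ssh / zfs receive
    zfs receive -Fdu tank < /backup/tank-migrate.zfs       # replay onto the rebuilt pool
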
2009 Jan 21
8
cifs performance
Hello! I am setting up a ZFS/CIFS home storage server, and now have low performance when playing movies stored on this ZFS from a Windows client. The server hardware is not new, but on Windows its performance was normal. The CPU is an AMD Athlon Burton Thunderbird 2500 running at 1.7GHz, with 1024MB RAM, and storage: usb c4t0d0 ST332062-0A-3.AA-298.09GB /pci@0,0/pci1458,5004@2,2/cdrom@1/disk@
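To separate local disk throughput from the CIFS/network path, a crude local test on the server can help (file paths are placeholders):
    dd if=/dev/zero of=/tank/media/ddtest bs=1024k count=1024    # sequential write into the pool
    dd if=/tank/media/somemovie of=/dev/null bs=1024k            # sequential read of an existing file
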
2010 Jan 17
3
I can't seem to get the pool to export...
root@nas:~# zpool export -f raid cannot export 'raid': pool is busy I've disabled all the services I could think of. I don't see anything accessing it. I also don't see any of the filesystems mounted with mount or "zfs mount". What's the deal? This is not the rpool, so I'm not booted off it or anything like that.
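Things that commonly keep a pool busy (a generic checklist-style sketch, not specific to this system):
    fuser -c /raid                                 # processes with files open under the mountpoint
    zfs list -r -o name,mountpoint,mounted raid    # anything still mounted?
    swap -l                                        # a zvol used as swap or dump will also pin the pool
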
2010 Jul 09
2
snapshot out of space
I am getting the following error message when trying to do a zfs snapshot: root@pluto# zfs snapshot datapool/mars@backup1 cannot create snapshot 'datapool/mars@backup1': out of space root@pluto# zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT datapool 556G 110G 446G 19% ONLINE - rpool 278G 12.5G 265G 4% ONLINE - Any ideas???
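zpool list reports raw pool space, while snapshot creation is limited by the dataset's own accounting, so the dataset-level numbers are worth checking (dataset name taken from the snippet):
    zfs get quota,refquota,reservation,refreservation,available datapool/mars
    zfs list -o name,used,avail,refer datapool/mars
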
2011 May 17
3
Reboots when importing old rpool
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk, and tried to import it, with: # zpool import -f <long id number> Old_rpool but the computer reboots. Why is that? On my old hard disk, I have 10-20 BE, starting with OpenSolaris 2009.06 and upgraded to b134 up to snv_151a. I also have a WinXP entry in GRUB. This hard disk is partitioned, with a
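One way to take the old BEs' mountpoints out of the picture during the import (a sketch, not a diagnosis of the reboot):
    zpool import -f -N -R /a <long id number> Old_rpool    # -N: do not mount, -R: alternate root
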
2011 Aug 11
6
unable to mount zfs file system.. please help
# uname -a Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux # rpm -qa|grep zfs zfs-test-0.5.2-1 zfs-modules-0.5.2-1_2.6.18_194.el5 zfs-0.5.2-1 zfs-modules-devel-0.5.2-1_2.6.18_194.el5 zfs-devel-0.5.2-1 # zfs list NAME USED AVAIL REFER MOUNTPOINT pool1 120K 228G 21K /pool1 pool1/fs1 21K 228G 21K /vik [root@
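A few generic things to look at with early zfs-on-linux builds (dataset and path names are taken from the snippet):
    zfs get mountpoint,canmount,mounted pool1/fs1
    zfs mount -a                       # if this build's mount helper works
    mount -t zfs pool1/fs1 /vik        # mounting the dataset directly via mount.zfs
    dmesg | tail                       # kernel messages often explain the failure
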
2008 Mar 13
12
7-disk raidz achieves 430 MB/s reads and 220 MB/s writes on a $1320 box
I figured the following ZFS 'success story' may interest some readers here. I was interested to see how much sequential read/write performance it would be possible to obtain from ZFS running on commodity hardware with modern features such as PCI-E busses, SATA disks, well-designed SATA controllers (AHCI, SiI3132/SiI3124). So I made this experiment of building a fileserver by
2009 Jan 16
4
Verbose Information from "zfs send -v <snapshot>"
What 'verbose information' does "zfs send -v <snapshot>" report? Also, on Solaris 10u6 I don't get any output at all - is this a bug? Regards, Nick
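Worth noting that the stream itself goes to stdout while any messages go to stderr, so redirecting stdout keeps them visible (dataset name is a placeholder):
    zfs send -v tank/fs@snap > /dev/null
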
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS Version 10? What, other than zfs send/receive, can be done to free the fragmented space? One ZFS was used for some months to store some large disk images (each 50GByte large) which are copied there with rsync. This ZFS then reports 6.39TByte usage with zfs list and only 2TByte usage with du. The other ZFS was used for similar
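The gap between du and zfs list is often space held by snapshots of the rsync'd images rather than fragmentation; a quick way to see the breakdown (dataset name is a placeholder; the usedby* properties need a recent enough ZFS version):
    zfs list -o space tank/images          # AVAIL, USEDSNAP, USEDDS, USEDCHILD breakdown
    zfs list -t snapshot -r tank/images    # which snapshots exist and how much each holds
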
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while... What is the status of ZFS support for TRIM? For the pool in general... and... Specifically for the slog and/or cache???
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool; I was just wondering if it is vdev specific, or pool wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives, and some normal 512B sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
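ashift is recorded per top-level vdev rather than pool wide, which the cached config makes easy to confirm (pool name is a placeholder):
    zdb -C tank | grep -w ashift    # one ashift line per top-level vdev
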