similar to: zfs send/receive performance concern

Displaying 20 results from an estimated 7000 matches similar to: "zfs send/receive performance concern"

2006 May 23
1
iostat numbers for ZFS disks, build 39
I updated an i386 system to b39 yesterday, and noticed this when running iostat:
    r/s    w/s   kr/s          kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
    0.0    0.5    0.0          10.0   0.0   0.0     0.0     0.5   0   0  c0t0d0
    0.0    0.5    0.0          10.0   0.0   0.0     0.0     0.6   0   0  c0t1d0
    0.0   65.1    0.0   119640001.5   0.0   0.0     0.0     0.3   0   2  c0t2d0
    0.0   65.1    0.0   119640090.2   0.0
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello, Sorry for the (very) long subject, but I've pinpointed the problem to this exact situation. I know about the other threads related to hangs, but in my case there was no < zfs destroy > involved, nor any compression or deduplication. To make a long story short, when - a disk contains 2 partitions (p1=32GB, p2=1800GB) and - p1 is used as part of a zfs mirror of rpool
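A minimal sketch of the layout described, with hypothetical device names (p1 as one side of the root mirror, p2 backing a separate data pool on the same disk):

    # hypothetical devices; p1 (32 GB) joins the rpool mirror, p2 (1800 GB) gets its own pool
    zpool attach rpool c0t0d0s0 c1t0d0p1
    zpool create datapool c1t0d0p2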
2009 Nov 20
2
ZFS Send Priority and Performance
I have several X4540 Thor systems with one large zpool that replicate data to a backup host via zfs send/recv. The process works quite well when there is little to no usage on the source systems. However, when the source systems are under load, replication slows down to a near crawl. Without load, replication usually streams along near 1 Gbps, but under load it drops down to anywhere between 0 - 5000
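For reference, a typical incremental send/recv replication pipeline of the kind described; the snapshot, dataset, and host names below are hypothetical:

    zfs snapshot tank/data@backup-today
    zfs send -i tank/data@backup-yesterday tank/data@backup-today | \
        ssh backuphost zfs receive -F backup/data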
2013 Feb 18
1
btrfs send & receive produces "Too many open files in system"
I believe what I am going to write is a bug report. I finally ran # btrfs send -v /mnt/adama-docs/backups/20130101-192722 | btrfs receive /mnt/tmp/backups to migrate btrfs from one partition layout to another. After a while the system kept saying "Too many open files in system" and denied access to almost every command line tool. When I had access to iostat I confirmed the
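The "Too many open files in system" message corresponds to the kernel-wide file handle limit; the live count and ceiling can be read from the standard Linux procfs entries (not taken from the thread):

    cat /proc/sys/fs/file-nr     # allocated handles, free handles, system-wide maximum
    cat /proc/sys/fs/file-max    # the ceiling itself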
2008 Feb 05
31
ZFS Performance Issue
This may not be a ZFS issue, so please bear with me! I have 4 internal drives that I have striped/mirrored with ZFS and have an application server which is reading/writing to hundreds of thousands of files on it, thousands of files @ a time. If 1 client uses the app server, the transaction (reading/writing to ~80 files) takes about 200 ms. If I have about 80 clients attempting it @ once, it can
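A striped/mirrored layout over four internal drives is normally built as two mirrors in one pool; a sketch with hypothetical device names:

    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0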
2010 Jan 12
3
set zfs:zfs_vdev_max_pending
We have a zpool made of 4 x 512GB iSCSI LUNs located on a network appliance. We are seeing poor read performance from the ZFS pool. The release of Solaris we are using is Solaris 10 10/09 s10s_u8wos_08a SPARC, and the server itself is a T2000. I was wondering how we can tell whether the zfs_vdev_max_pending setting is impeding read performance of the pool. (The pool consists of lots of small files.)
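On Solaris 10 this tunable is usually inspected and adjusted as follows; the value shown is illustrative, not a recommendation from the thread:

    echo zfs_vdev_max_pending/D | mdb -k        # read the current value from the live kernel
    echo zfs_vdev_max_pending/W0t10 | mdb -kw   # change it at runtime (not persistent)
    # or persistently, via a line in /etc/system (takes effect after reboot):
    set zfs:zfs_vdev_max_pending = 10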
2007 Nov 29
10
ZFS write time performance question
Hi, this is a ZFS performance question in regard to SAN traffic. We are trying to benchmark ZFS vs VxFS file systems and I get the following performance results. Test setup: Solaris 10 11/06, dual-port QLogic HBA with SFCSM (for ZFS) and DMP (for VxFS), Sun Fire V490 server, LSI RAID 3994 on the backend, ZFS record size: 128KB (default), VxFS block size: 8KB (default). The only thing
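The record size mentioned in the setup can be checked or changed per dataset; the dataset name below is hypothetical:

    zfs get recordsize tank/bench
    zfs set recordsize=8k tank/bench    # e.g. to match the 8 KB VxFS block size being compared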
2006 Jul 01
1
The ZFS Read / Write roundabout
Hey all - I was playing a little with zfs today and noticed that when I was untarring a 2.5GB archive both from and onto the same spindle in my laptop, the bytes read and written over time were seesawing between approximately 23MB/s and 0MB/s. It seemed like we read and read and read till we were all full up, then wrote until we were empty, and so the cycle went. Now: as it happens,
2010 Dec 09
3
ZFS send/receive while write is enabled on receive side?
Hi all, from much of the documentation I've seen, the advice is to set readonly=on on volumes on the receiving side during send/receive operations. Is this still a requirement? I've been trying the send/receive while NOT setting the receiver to readonly and haven't seen any problems even though we're traversing and ls'ing the dirs within the receiving
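The precaution being asked about looks like this in practice; dataset and host names are hypothetical:

    zfs set readonly=on backup/data
    zfs send -i tank/data@mon tank/data@tue | ssh backuphost zfs receive backup/data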
2007 Jul 14
3
zfs list hangs if zfs send is killed (leaving zfs receive process)
I was in the process of doing a large zfs send | zfs receive when I decided that I wanted to terminate the zfs send process. I killed it, but the zfs receive doesn't want to die... In the meantime my zfs list command just hangs. Here is the tail end of the truss output from a "truss zfs list":
    ioctl(3, ZFS_IOC_OBJSET_STATS, 0x08043484) = 0
    ioctl(3,
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss, Relatively low traffic to the pool, but sync takes too long to complete and other operations are also not that fast. Disks are on a 3510 array. zil_disable=1.
    bash-3.00# ptime sync
    real     1:21.569
    user        0.001
    sys         0.027
During sync, zpool iostat and vmstat look like:
    f3-1     504G   720G    370    859   995K  10.2M
    misc    20.6M  52.0G      0      0
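The zil_disable=1 mentioned above refers to the old Solaris tunable, normally set in /etc/system or flipped on a live kernel with mdb (it takes effect when datasets are mounted); shown here only for context:

    set zfs:zil_disable = 1              # /etc/system
    echo zil_disable/W0t1 | mdb -kw      # on a running system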
2007 Nov 15
1
ZFS snapshot send/receive via intermediate device
Hey folks, I have no knowledge at all about how streams work in Solaris, so this might have a simple answer, or be completely impossible. Unfortunately I'm a windows admin so haven't a clue which :) We're looking at rolling out a couple of ZFS servers on our network, and instead of tapes we're considering using off-site NAS boxes for backups. We think
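Sending a snapshot to a file on an NFS-mounted NAS box and restoring from it later is straightforward; the paths and names below are hypothetical:

    zfs send tank/home@weekly > /net/nasbox/backups/home-weekly.zfs
    zfs receive tank/home-restored < /net/nasbox/backups/home-weekly.zfs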
2009 Dec 05
0
zfs send/receive with different on disk versions
Can you send a volume from a version 10 zpool to a version 14 zpool? I've been trying to send a zfs volume, and all its snapshots, to a new machine but the receiver eventually enters sleep and never finishes. -- This message posted from opensolaris.org
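A quick way to confirm the pool and dataset versions on both ends before sending; the pool and volume names are hypothetical:

    zpool get version oldpool newpool
    zfs get version oldpool/vol
    zpool upgrade -v        # lists what each pool version adds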
2009 Aug 23
3
zfs send/receive and compression
Is there a mechanism by which you can perform a zfs send | zfs receive and not have the data uncompressed and recompressed at the other end? I have a gzip-9 compressed filesystem that I want to back up to a remote system and would prefer not to have to recompress everything again at such great computational expense. If this doesn't exist, how would one go about creating an RFE for
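The send stream itself carries uncompressed data, so a common workaround is to compress the stream in transit and let the receiving dataset recompress on write; the names below are hypothetical:

    zfs send tank/archive@snap | gzip | ssh remote 'gunzip | zfs receive backup/archive'
    zfs get compression backup/archive   # verify gzip-9 is set or inherited on the receiving side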
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive methods and they work without a problem. What I am trying to do is recreate the rpool and underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
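A rough sketch of the manual rebuild described, assuming an x86 system and hypothetical device, archive, and boot-environment names (it follows the usual rpool recovery shape, not a procedure taken from the thread):

    zpool create -f -R /a rpool c0t0d0s0
    zfs create rpool/ROOT
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/s10_u8
    zfs create -V 2G rpool/dump
    zfs create -V 4G rpool/swap
    zfs create rpool/export
    zfs create rpool/export/home
    zfs mount rpool/ROOT/s10_u8
    (cd /a && tar xpf /backup/root.tar)
    zpool set bootfs=rpool/ROOT/s10_u8 rpool
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0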
2010 Mar 23
0
zfs send/receive and file system properties
I am trying to coordinate properties and data between 2 file servers. On file server 1 I have:
    zfs get all zfs52/export/os/sles10sp2
    NAME                       PROPERTY  VALUE                  SOURCE
    zfs52/export/os/sles10sp2  type      filesystem             -
    zfs52/export/os/sles10sp2  creation  Mon Mar 22 15:28 2010
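Two ways to keep the two servers in step: replicate the properties together with the data via a recursive send, or compare the property lists directly; the snapshot and host names here are hypothetical:

    zfs send -R zfs52/export/os/sles10sp2@sync | ssh fileserver2 zfs receive -d zfs52
    zfs get -H -o property,value,source all zfs52/export/os/sles10sp2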
2009 Feb 03
3
Time taken Backup using ZFS Send Receive
I have been using ZFS send and receive for a while, and I noticed that when I do a send on a zfs file system of about 3 gig plus, it takes only about 3 minutes max: zfs send application/sample@back > /backup/sample.zfs However, when I tried to send a file system that's about 20 gig, it took almost an hour. I would have expected that since 3 gig took 3 mins, then 20 gig should
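For comparison, the full send from the post next to the incremental form, which only transfers blocks changed since an earlier snapshot (the @prev snapshot name is hypothetical):

    zfs send application/sample@back > /backup/sample.zfs
    zfs send -i application/sample@prev application/sample@back > /backup/sample-incr.zfs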
2006 Sep 07
5
Performance problem of ZFS ( Sol 10U2 )
Hi, I deployed ZFS on our mailserver recently, hoping for eternal peace after running on UFS and moving files with each TB added. It is a mailserver - its mdirs are on a ZFS pool:
                                 capacity     operations    bandwidth
    pool                       used  avail   read  write   read  write
    -------------------------  -----  -----  -----  -----  -----  -----
2010 Feb 08
1
Big send/receive hangs on 2009.06
So, I was running my full backup last night, backing up my main data pool zp1, and it seems to have hung. Any suggestions for additional data gathering?
    -bash-3.2$ zpool status zp1
      pool: zp1
     state: ONLINE
    status: The pool is formatted using an older on-disk format. The pool can
            still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool
2006 Dec 15
3
ZFS works in waves
A little back story: I have a Norco DS-1220, a 12-bay SATA box. It is connected via eSATA (SiI3124) on PCI-X; two drives are straight connections, and the other two ports go to 5x port multipliers within the box. My hope for this was to use 12 500GB drives and ZFS to make a very large & simple data dump spot on my network for other servers to rsync to daily & use zfs snapshots for
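A sketch of the intended end state, assuming hypothetical device names and a double-parity layout (the post does not say which vdev type was used): one large pool plus daily snapshots of the rsync target.

    zpool create bigdump raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
                         raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
    zfs create bigdump/rsync
    zfs snapshot bigdump/rsync@$(date +%Y%m%d)    # e.g. from a nightly cron entry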