search for: servuhom

Displaying 17 results from an estimated 17 matches for "servuhom".

2009 Jan 07
9
'zfs recv' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert carsten.aulbert at aei.mpg.de sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningfully sized snapshots from, say, an X4540 takes up to 24 hours, for as little as 300GB
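A minimal sketch of the mbuffer approach mentioned in that thread; the hostname, port, buffer sizes, and dataset names here are placeholders, not values from the post:

    # on the receiving host: listen on a TCP port and feed zfs recv
    mbuffer -I 9090 -s 128k -m 1G | zfs recv tank/backup
    # on the sending host: stream the snapshot through a 1GB buffer
    zfs send tank/data@snap1 | mbuffer -O recvhost:9090 -s 128k -m 1G

mbuffer decouples the bursty zfs send output from the network, which is what makes it so much faster than a bare ssh pipe.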
2009 Mar 28
3
zfs scheduled replication script?
I have a backup system using zfs send/receive (I know there are pros and cons to that, but it's suitable for what I need). What I have now is a script which runs daily: it does a zfs send, compresses and writes the stream to a file, then transfers it with ftp to a remote host. It does a full backup every 1st, and incrementals (with the 1st as reference) after that. It works, but it's not quite resource-effective
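A hedged sketch of that daily scheme; the dataset name, backup paths, and snapshot naming convention are all assumptions:

    #!/bin/ksh
    # full send on the 1st of the month, incremental against the 1st otherwise
    DS=tank/data
    SNAP=$DS@$(date +%Y%m%d)
    zfs snapshot "$SNAP"
    if [ "$(date +%d)" = "01" ]; then
        zfs send "$SNAP" | gzip > /backup/full-$(date +%Y%m%d).zfs.gz
    else
        REF=$DS@$(date +%Y%m)01        # the snapshot taken on the 1st
        zfs send -i "$REF" "$SNAP" | gzip > /backup/incr-$(date +%Y%m%d).zfs.gz
    fi
    # the resulting file can then be pushed to the remote host with ftp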
2009 Jun 28
2
[storage-discuss] ZFS snapshot send/recv "hangs" X4540 servers
On Fri, Jun 26, 2009 at 10:14 AM, Brent Jones <brent at servuhome.net> wrote: > On Thu, Jun 25, 2009 at 12:00 AM, James Lever <j at jamver.id.au> wrote: >> >> On 25/06/2009, at 4:38 PM, John Ryan wrote: >> >>> Can I ask the same question - does anyone know when the 113 build will >>> show up on pkg.opensolaris.org/d...
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
...I'll try to escalate this with my Sun support contract, but Sun support still isn't very familiar/clued in about OpenSolaris, so I doubt I will get very far. Cross-posting to zfs-discuss also, as others may have seen this and know of a solution/workaround. -- Brent Jones brent at servuhome.net
2010 Feb 02
7
Help needed with zfs send/receive
Hi folks, I'm having (as the title suggests) a problem with zfs send/receive. The command line is like this: pfexec zfs send -Rp tank/tsm@snapshot | ssh remotehost pfexec zfs recv -v -F -d tank This works like a charm as long as the snapshot is small enough. When it gets too big (meaning somewhere between 17G and 900G), I get ssh errors (can't read from remote host). I tried
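If the drops are idle-timeout related (an assumption; the snippet doesn't say why ssh dies), OpenSSH keepalives are a cheap thing to try on that same command line:

    # send keepalive probes every 30s; give up after 6 missed replies
    pfexec zfs send -Rp tank/tsm@snapshot | \
        ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=6 remotehost \
        pfexec zfs recv -v -F -d tank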
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello, I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow: ~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs file system with the same test and got 121MB/s. Is there any way to fix this? I really would like to have comparable performance between the zfs filesystem and the zfs zvols. # first test is a
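A sketch of one way to reproduce that comparison; the pool, volume, and file names are hypothetical, and the raw zvol device path follows the Solaris /dev/zvol/rdsk convention:

    # create a test zvol and write 4GB to its raw device at 1MB block size
    zfs create -V 10G tank/testvol
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=4096
    # same write against a plain file on the zfs file system
    dd if=/dev/zero of=/tank/testfile bs=1024k count=4096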
2009 Feb 04
8
Data loss bug - sidelined??
In August last year I posted this bug, a brief summary of which would be that ZFS still accepts writes to a faulted pool, causing data loss, and potentially silent data loss: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6735932 There have been no updates to the bug since September, and nobody seems to be assigned to it. Can somebody let me know what's happening with this
2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new server, but for some reason I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
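The "pfexec: can't get real path of" error suggests the remote side could not resolve the command it was given; one guess (an assumption, since the snippet is truncated) is to spell out the full path and quote the remote command:

    # dataset and host names are hypothetical
    zfs send tank/media@snap | ssh newserver "pfexec /usr/sbin/zfs recv -d tank"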
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour; both are CPU limited long before the 10g link is. I've also tried mbuffer, but I get broken pipe errors part way through the transfer. I'm open to ideas for faster ways to do either zfs send directly or through a compressed file of the zfs send output. For the moment I: zfs send > pigz, scp arcfour the gz file to the
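For an unencrypted transfer on a trusted network, netcat avoids the ssh cipher bottleneck entirely; a sketch with hypothetical host, port, and dataset names (note that listen-mode flags vary between netcat implementations, e.g. nc -l -p 9090 on GNU netcat):

    # on the receiver
    nc -l 9090 | zfs recv -d tank
    # on the sender
    zfs send tank/data@snap | nc recvhost 9090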
2008 Jul 31
17
Can I trust ZFS?
Hey folks, I guess this is an odd question to be asking here, but I could do with some feedback from anybody who's actually using ZFS in anger. I'm about to go live with ZFS in our company on a new fileserver, but I have some real concerns about whether I can really trust ZFS to keep my data alive if things go wrong. This is a big step for us, we're a 100% Windows
2009 Jan 07
2
ZFS + OpenSolaris for home NAS?
On Wed, January 7, 2009 04:29, Peter Korn wrote: > Decision #4: file system layout > I'd like to have ZFS root mirrored. Do we simply use a portion of the existing disks for this, or add two disks just for root? Use USB-2 flash as those 2 disks? And where does swap go? The default install in Osol 0811 (which is what I just upgraded my home NAS to) gives you a zfs root pool that
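Mirroring an existing root pool is a two-step job on OpenSolaris; the device names below are hypothetical, and the installgrub step applies to x86 systems:

    # attach a second disk to the root pool to form a mirror
    pfexec zpool attach rpool c0t0d0s0 c0t1d0s0
    # make the new disk bootable as well
    pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0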
2008 Aug 21
3
ZFS handling of many files
...could run into severe performance problems. Has anyone seen how ZFS behaves under such file counts? Currently NTFS handles it reasonably well (Explorer doesn't like large directories, but our applications bypass that). Any feedback would be appreciated! Regards, -- Brent Jones brent at servuhome.net
2009 Aug 25
41
snv_110 -> snv_121 produces checksum errors on Raid-Z pool
I have a 5-disk (500GB each) Raid-Z pool that has been producing checksum errors right after upgrading SXCE to build 121. They seem to be randomly occurring on all 5 disks, so it doesn't look like a disk failure situation. Repeatedly running a scrub on the pool randomly repairs between 20 and a few hundred checksum errors. Since I hadn't physically touched the machine, it seems a
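The standard commands for chasing checksum errors like these (pool name hypothetical):

    zpool scrub tank        # walk every block and repair what it can
    zpool status -v tank    # per-device checksum counts plus affected files
    zpool clear tank        # reset the error counters between scrub runs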
2008 Nov 06
45
'zfs recv' is very slow
Hi, I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on B is running extremely slowly. If I run the zfs send on A and redirect output to a file, it sends at 2MB/sec. But when I use 'zfs send
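The redirect-to-file trick in that post is a useful way to tell the two halves apart; a sketch with hypothetical snapshot names:

    # time the send alone, with no network or recv in the pipeline
    zfs send -i tank/fs@snap1 tank/fs@snap2 > /var/tmp/stream
    # then time the recv path by replaying the saved stream
    ssh B "zfs recv tank/fs" < /var/tmp/stream

If the first step is fast and the second is slow, the bottleneck is on the receiving side rather than in zfs send.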
2009 Feb 11
8
Write caches on X4540
We're using some X4540s, with OpenSolaris 2008.11. According to my testing, to optimize our systems for our specific workload, I've determined that we get the best performance with the write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set in /etc/system. The only issue is setting the write cache permanently, or at least quickly. Right now, as it is,
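The /etc/system half of that tuning looks like this (the setting named in the post; it takes effect at the next reboot). The per-disk write cache itself is normally toggled interactively through format -e under its cache menu, which is exactly why the poster wants a faster, persistent method:

    * /etc/system entry: tell ZFS not to issue cache-flush commands
    set zfs:zfs_nocacheflush = 1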
2008 Dec 28
2
zfs mount hangs
Hi, System: Netra 1405, 4x450MHz, 4GB RAM and 2x146GB (root pool) and 2x146GB (space pool). snv_98. After a panic the system hangs on boot, and manual attempts to mount (at least) one dataset in single-user mode hang as well. The panic: Dec 27 04:42:11 base panic[cpu0]/thread=300021c1a20: Dec 27 04:42:11 base unix: [ID 521688 kern.notice] [AFT1] errID 0x00167f73.1c737868 UE Error(s) Dec 27
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5TB raidz1. I want to add "phase 2", which is another 7x1.5TB raidz1. Can I add the second phase to the first phase and basically have two raid5's striped (in RAID terms)? Yes, I probably should upgrade the zpool format too. Currently running snv_104. Also should upgrade to 110. If that is possible, would anyone happen to have the simple command lines to
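This is exactly what zpool add does; a sketch with hypothetical device names (adding a vdev is permanent, so it is worth double-checking the device list first):

    # stripe a second 7-disk raidz1 vdev across the existing pool
    pfexec zpool add tank raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
    # bring the on-disk format up to the current version afterwards
    pfexec zpool upgrade tank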