similar to: zfs scheduled replication script?

Displaying 20 results from an estimated 7000 matches similar to: "zfs scheduled replication script?"

2009 Jan 07
9
'zfs recv' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert (carsten.aulbert at aei.mpg.de) sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningful sized snapshots from say an X4540 takes up to 24 hours, for as little as 300GB
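A minimal sketch of the mbuffer approach mentioned in this thread, assuming a plain TCP link between the two hosts; the pool, snapshot and port names below are placeholders:

  # On the receiving host: listen on a TCP port, buffer, and feed zfs recv
  mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/backup

  # On the sending host: stream the snapshot straight to that port
  zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver:9090

The large in-memory buffer keeps zfs recv fed during its bursty write phases, which is why it often helps far more than raw link speed would suggest.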
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote: > Brent, > > I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue. > > The other issue I noticed is that, as opposed to the
2010 Feb 02
7
Help needed with zfs send/receive
Hi folks, I'm having (as the title suggests) a problem with zfs send/receive. The command line is like this: pfexec zfs send -Rp tank/tsm@snapshot | ssh remotehost pfexec zfs recv -v -F -d tank This works like a charm as long as the snapshot is small enough. When it gets too big (meaning somewhere between 17G and 900G), I get ssh errors (can't read from remote host). I tried
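One workaround often suggested for ssh sessions that die partway through a large send is to enable keepalives on the client side; a sketch of the same pipeline with keepalive options added (the interval values are arbitrary placeholders):

  # Keepalives stop an idle-looking ssh connection being torn down mid-stream
  pfexec zfs send -Rp tank/tsm@snapshot | \
    ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=10 remotehost \
    pfexec zfs recv -v -F -d tank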
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour. Both are CPU limited long before the 10g link is. I've also tried mbuffer, but I get broken pipe errors part way through the transfer. I'm open to ideas for faster ways to do either zfs send directly or through a compressed file of the zfs send output. For the moment I: zfs send > pigz, scp (arcfour) the gz file to the
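An ssh-free variant of the pigz idea is to stream the compressed send over netcat instead of writing an intermediate file; a rough sketch, assuming the link is trusted (nc is unencrypted) and noting that nc option syntax varies between implementations:

  # Receiver: listen, decompress, feed zfs recv
  nc -l 9090 | pigz -d | zfs recv -F tank/backup

  # Sender: compress with all cores, push over the raw TCP link
  zfs send tank/data@snap1 | pigz | nc receiver 9090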
2004 Mar 12
2
mapping home dir
Hi, I am running a RH9 box in a W2K domain. I have installed winbind on the RH9 box and joined it to the domain successfully. Domain users can log in with their accounts. The problem is that when they log in they get a message stating that their home dir doesn't exist. How can I map their home dir that is on a W2K member server, and how can I create their home dir on the RH9 box when the domain users log in?
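The usual answer for this is a winbind home-directory template in smb.conf plus pam_mkhomedir to create the directory at first login; a sketch, assuming RH9-style PAM configuration (paths and domain layout are placeholders):

  # /etc/samba/smb.conf - where winbind should place/expect home directories
  [global]
      template homedir = /home/%D/%U
      template shell   = /bin/bash

  # /etc/pam.d/system-auth - create the local home directory on first login
  session    required    pam_mkhomedir.so skel=/etc/skel umask=0022

Mounting the existing home share from the W2K member server would then be a separate step (e.g. autofs with smbfs), which this sketch does not cover.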
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello, I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow, ~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs file system with the same test and got 121MB/s. Is there any way to fix this? I really would like to have comparable performance between the zfs filesystem and the zfs zvols. # first test is a
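A minimal way to reproduce the comparison described here, with dataset names, sizes and counts as placeholders:

  # Create a test zvol and a test filesystem on the same pool
  zfs create -V 10g tank/testvol
  zfs create tank/testfs

  # Sequential 1MB writes to the zvol (raw character device path on Solaris)
  time dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=4096

  # The same write pattern against a file in the filesystem
  time dd if=/dev/zero of=/tank/testfs/testfile bs=1024k count=4096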
2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new server, but for some reason I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
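The "pfexec: can't get real path of" error usually comes down to how the remote command is quoted; a hedged sketch of the form that tends to work, with host, pool and snapshot names as placeholders:

  # Quote the remote side so pfexec and zfs run as one command on the new
  # server, and give a full path in case the non-interactive PATH lacks it
  zfs send -R oldpool/media@move | \
    ssh newserver "pfexec /usr/sbin/zfs recv -F -d newpool"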
2009 Feb 04
8
Data loss bug - sidelined??
In August last year I posted this bug, a brief summary of which would be that ZFS still accepts writes to a faulted pool, causing data loss, and potentially silent data loss: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6735932 There have been no updates to the bug since September, and nobody seems to be assigned to it. Can somebody let me know what's happening with this
2008 Jul 31
17
Can I trust ZFS?
Hey folks, I guess this is an odd question to be asking here, but I could do with some feedback from anybody who's actually using ZFS in anger. I'm about to go live with ZFS in our company on a new fileserver, but I have some real concerns about whether I can really trust ZFS to keep my data alive if things go wrong. This is a big step for us, we're a 100% windows
2009 Jan 07
2
ZFS + OpenSolaris for home NAS?
On Wed, January 7, 2009 04:29, Peter Korn wrote: > Decision #4: file system layout > I'd like to have ZFS root mirrored. Do we simply use a portion of the existing disks for this, or add two disks just for root? Use USB-2 flash as those 2 disks? And where does swap go? The default install in OpenSolaris 2008.11 (which is what I just upgraded my home NAS to) gives you a zfs root pool that
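For the mirrored-root part of the question, the usual recipe on OpenSolaris-era releases is to attach a second disk to rpool and reinstall the boot blocks; a sketch with disk device names as placeholders (swap and dump normally stay as zvols inside rpool):

  # Attach the second disk's slice to the root pool to form a mirror
  pfexec zpool attach rpool c1t0d0s0 c1t1d0s0

  # Put GRUB on the newly attached disk so either disk can boot
  pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

  # Verify where swap and dump currently live
  swap -l
  dumpadm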
2005 Jan 15
1
Seeking pointers to help
Noob alert... I've been through most of the available documentation at a fairly high level - if I've missed the obvious, just point me in the right direction. What I'd like to build: a Windows XP system (actually, I'd prefer a *nix but must use Win). Backend data stored in a MySQL db. Front-end user interface built using PHP/HTML running on Apache. The user builds a select via the web
2002 Jan 08
1
Very large quantity of files
Hello, to explain: I have two machines running the same hard- and software. Each has two harddrives 80GB/40GB with 500 megs of RAM and a 650MHz PIII, running SuSE Linux 7.1 with Kernel 2.2.18. They are connected on a local 100Mbps Ethernet. The harddrives are pretty full (total ~94GB) with a very large quantity of small files. The initial copy has taken about 48 hours. - I didn't worry
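Assuming the copies here are being done with rsync over the 100Mbps link (the snippet does not show the exact command), a sketch of the options usually suggested for huge trees of small files on a fast LAN:

  # -a preserves metadata, -H keeps hard links; --whole-file skips the delta
  # algorithm, which mostly costs CPU when the network is not the bottleneck
  rsync -aH --whole-file --delete /data/ otherhost:/data/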
2009 Aug 25
41
snv_110 -> snv_121 produces checksum errors on Raid-Z pool
I have a Raid-Z pool of five 500GB disks that has been producing checksum errors right after upgrading SXCE to build 121. They seem to be randomly occurring on all 5 disks, so it doesn't look like a disk failure situation. Repeatedly running a scrub on the pool randomly repairs between 20 and a few hundred checksum errors. Since I hadn't physically touched the machine, it seems a
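The usual triage loop for a pool that starts reporting checksum errors after an upgrade looks something like this (the pool name is a placeholder):

  # Show which devices and files are affected
  zpool status -v tank

  # Re-read everything and repair whatever still has good redundancy
  zpool scrub tank

  # Once the cause is understood, reset the error counters
  zpool clear tank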
2008 Nov 06
45
'zfs recv' is very slow
Hi, I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on B is running extremely slowly. If I run the zfs send on A and redirect output to a file, it sends at 2MB/sec. But when I use 'zfs send
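One way to narrow down which leg is slow is to take ssh out of the pipeline and time the send, the transfer and the receive separately; a sketch with dataset and snapshot names as placeholders:

  # On A: time the incremental send on its own by writing the stream to a file
  time zfs send -i tank/fs@prev tank/fs@curr > /var/tmp/incr.stream

  # Copy the stream across, then time only the receive on B
  scp /var/tmp/incr.stream B:/var/tmp/incr.stream
  ssh B "/usr/bin/time sh -c 'zfs recv -F tank/fs < /var/tmp/incr.stream'"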
2009 Feb 11
8
Write caches on X4540
We're using some X4540s, with OpenSolaris 2008.11. According to my testing, to optimize our systems for our specific workload, I've determined that we get the best performance with the write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set in /etc/system. The only issue is setting the write cache permanently, or at least quickly. Right now, as it is,
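A sketch of the /etc/system half of that setup; the per-disk write cache itself is normally toggled through the interactive format utility, which is awkward to script:

  # Stop ZFS from issuing cache-flush requests (takes effect after a reboot)
  echo 'set zfs:zfs_nocacheflush = 1' >> /etc/system

  # The per-disk write cache is changed interactively:
  #   format -e  ->  select disk  ->  cache  ->  write_cache  ->  enable/disable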
2009 Jun 04
15
Scheduled maintenance?
I'm running Xen on SLES10SP2. It's been much more stable than SP1, but I still occasionally have issues. For example, one of my servers (8 CPU/32GB RAM) has five SLES PV domUs and five fully virtualized Windows 2k3 domUs. It had been up for almost 60 days. Yesterday afternoon one of the SLES domUs stopped responding. I went to check on it and I couldn't using
2008 Aug 21
3
ZFS handling of many files
Hello, I have been experimenting with ZFS on a test box, preparing to present it to management. One thing I cannot test right now is our real-world application load. We currently write small files to CIFS shares, about 250,000 files a day, in various sizes (1KB to 500MB). Some directories get a lot of individual files (sometimes 50,000 or more) in a single directory. We spoke to a Sun
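A quick way to approximate that file-count pattern on the test box before involving the real application; paths and counts below are placeholders:

  # Create 50,000 small files in a single directory and time it
  mkdir -p /tank/cifs-test/burst
  time ksh -c 'i=0
      while [ $i -lt 50000 ]; do
          echo data > /tank/cifs-test/burst/f$i
          i=$((i+1))
      done'

  # Then see how a directory listing behaves at that size
  time ls -l /tank/cifs-test/burst > /dev/null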
2019 Nov 11
2
Error: Corrupted index cache file and Error: Maildir filename has wrong S value
What version are you running? Aki On 11.11.2019 12.26, Brent Clark via dovecot wrote: > Good day Guys > > Just an update, my colleague and I came across this script. > > https://www.dovecot.org/tools/maildir-size-fix.pl > > We made a backup, ran it, but unfortunately the problem still persists. > > Regards > Brent > > On 2019/11/11 11:42, Brent Clark wrote:
2019 Nov 11
2
Error: Corrupted index cache file and Error: Maildir filename has wrong S value
Good day guys, I have been met with a very interesting set of error messages. Here are the snippets: https://pastebin.com/raw/nFf79Ebc (sorry for all the redacted REMOVED_*). Google is proving to be a bit challenging in helping and explaining this. Could anyone please share how something like this happens, and more importantly how to recover from it? doveadm index -u <username> INBOX do
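For a corrupted dovecot.index.cache, the usual recovery is to let Dovecot throw the index away and rebuild it for the affected mailbox; a sketch with the username and mailbox as placeholders:

  # Discard and rebuild Dovecot's view of the mailbox
  doveadm force-resync -u someuser INBOX

  # Re-index the mailbox afterwards
  doveadm index -u someuser INBOX

The "wrong S value" errors are a separate issue with the size recorded in the maildir filenames, which is what the maildir-size-fix.pl script mentioned elsewhere in this thread tries to repair.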
2019 Nov 11
1
Error: Corrupted index cache file and Error: Maildir filename has wrong S value
Good day guys, I forgot to add and mention a very important piece of the puzzle: we are making use of Dovecot's compression plugin, i.e. https://doc.dovecot.org/configuration_manual/zlib_plugin/#compression Regards Brent Clark On 2019/11/11 14:49, Brent Clark wrote: > Good day Aki > > Thanks ever so much for replying. > > Interesting that you ask the version of dovecot. Any
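For context, a zlib plugin setup along the lines referenced above usually looks roughly like this in the Dovecot configuration (the algorithm and level are placeholders):

  # Enable the plugin for mail access
  mail_plugins = $mail_plugins zlib

  plugin {
    zlib_save = gz          # compress newly saved messages with gzip
    zlib_save_level = 6     # 1..9, trading CPU for disk space
  }

The S= size in maildir filenames still has to match what Dovecot expects, which is one way "wrong S value" errors can appear if existing files are compressed outside Dovecot without the filenames being updated.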