similar to: exclude hell !!!!!

Displaying 20 results from an estimated 800 matches similar to: "exclude hell !!!!!"

2010 Mar 05
2
ZFS replication send/receive errors out
My full backup script errored out the last two times I ran it. I've got a full Bash trace of it, so I know exactly what was done. There are a moderate number of snapshots on the zp1 pool, and I'm intending to replicate the whole thing into the backup pool. After housekeeping, I make a current snapshot on the data pool (zp1). Since this is a new full backup, I then
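A full replication of this sort is typically a recursive send piped into a receive on the backup pool. A minimal sketch, using the zp1 pool from the post but a hypothetical backup pool name and snapshot label:

    # take a recursive snapshot of the whole data pool
    zfs snapshot -r zp1@bup-now
    # replicate the pool and every descendant dataset/snapshot;
    # -R preserves the hierarchy, -F rolls the target back, -d maps names
    zfs send -R zp1@bup-now | zfs receive -Fdv backup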
2010 Feb 08
1
Big send/receive hangs on 2009.06
So, I was running my full backup last night, backing up my main data pool zp1, and it seems to have hung. Any suggestions for additional data gathering? -bash-3.2$ zpool status zp1 pool: zp1 state: ONLINE status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. action: Upgrade the pool using 'zpool
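As a first step toward the data gathering the poster asks about, an OpenSolaris-era system can at least show where the pipeline is stuck; a rough sketch (the PIDs are placeholders):

    # find the hung send/receive processes
    ps -ef | grep zfs
    # capture user-level stacks and see whether syscalls still complete
    pstack <send-pid>
    truss -p <receive-pid>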
2009 Feb 22
11
Confused about zfs recv -d, apparently
First, it fails because the destination directory doesn't exist. Then it fails because it DOES exist. I really expected one of those to work. So, what am I confused about now? (Running 2008.11) # zpool import -R /backups/bup-ruin bup-ruin # zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dv bup-ruin/fsfs/zp1 cannot receive: specified fs (bup-ruin/fsfs/zp1)
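The -d flag is the usual source of this confusion: it strips the source pool name from each sent path and appends the remainder to the destination, so the destination should not repeat the pool name. A minimal sketch of the distinction, using the names from the post (exact behavior varies by release):

    # zp1/home@... is created as bup-ruin/fsfs/home
    zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dv bup-ruin/fsfs
    # receiving into bup-ruin/fsfs/zp1 instead would recreate the stripped
    # source path underneath it, e.g. bup-ruin/fsfs/zp1/home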
2003 Feb 28
1
Newbie Question
Hi, This is a very simple question I realise, but I hope maybe someone can just help me out. I am trying to do a very simple thing, just transfer a file from machine A to machine B using rsync with ssh. This is what I'm typing: bash-2.03# rsync -avvv --rsh="ssh -l tdf" tdf@machine.niss.ac.uk:/export/home/tdf/demofile.html demofile.html This is what I get: opening connection
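For reference, the same pull is often written with rsync's -e shorthand for --rsh; a minimal sketch using the host and path from the post:

    rsync -av -e "ssh -l tdf" machine.niss.ac.uk:/export/home/tdf/demofile.html .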
2017 Sep 17
3
Confusing lstat() performance
On 17/09/17 18:03, Niklas Hambüchen wrote: > So far the only difference between `ls` and `bup index` I could observe > is that `bup index` chdir()s into the directory to index, ls doesn't. > > But when I `cd` into the dir and run `ls` without directory argument, it > is still much faster than bup index for each stat(). Hmm, bup uses the fchdir() syscall to go into the target
2005 Dec 19
1
Upsmon problem
Hi, I feel that I'm having a small problem with my upsmon configuration (maybe even simply a permissions problem) since I upgraded today to 2.0.2. I have apcsmart set up and talking to my UPS no problem (upsc gets an answer). I have upsd running no problem! But when I try to start upsmon, it's unable to talk to my upsd. Having tried a few options I found that saying RUN_AS_USER=root makes my upsmon
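The usual culprit after such an upgrade is that the unprivileged user upsmon switches to can no longer read its configuration or reach upsd. A minimal upsmon.conf sketch, with hypothetical user, UPS name, and password:

    RUN_AS_USER nut
    MONITOR apcsmart@localhost 1 monuser secretpass master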
2005 Jan 13
1
how to use solve.QP
At the risk of ridicule for my deficient linear algebra skills, I ask for help using the solve.QP function to do portfolio optimization. I am trying to follow a textbook example and need help converting the problem into the format required by solve.QP. Below is my sample code if anyone is willing to go through it. This problem will not solve because it is not set up properly. I hope I
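For anyone following along: solve.QP minimizes (1/2) b'Db - d'b subject to A'b >= b0, with the first meq constraints treated as equalities. A self-contained R sketch of a minimum-variance portfolio with a target return (all numbers are illustrative, not from the post):

    library(quadprog)

    # covariance matrix of three assets and their expected returns (made up)
    Dmat <- matrix(c(0.010, 0.002, 0.001,
                     0.002, 0.011, 0.003,
                     0.001, 0.003, 0.020), nrow = 3)
    mu   <- c(0.04, 0.05, 0.06)
    dvec <- rep(0, 3)          # no linear term: pure variance minimization

    # one column per constraint of A'b >= b0:
    # sum(w) == 1 (equality), mu'w >= 0.05, and w >= 0 element-wise
    Amat <- cbind(rep(1, 3), mu, diag(3))
    bvec <- c(1, 0.05, rep(0, 3))

    sol <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
    sol$solution               # optimal portfolio weights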
2017 Sep 17
0
Confusing lstat() performance
On 15/09/17 03:46, Niklas Hambüchen wrote: >> Out of interest have you tried testing performance >> with performance.stat-prefetch enabled? I have now tested with `performance.stat-prefetch: on` but am not observing a difference. So far the only difference between `ls` and `bup index` I could observe is that `bup index` chdir()s into the directory to index, ls doesn't. But when
2017 Sep 14
5
Confusing lstat() performance
Hi, I have a gluster 3.10 volume with a dir with ~1 million small files in it, say mounted at /mnt/dir with FUSE, and I'm observing something weird: When I list and stat them all using rsync, the lstat() calls that rsync does are incredibly fast (23 microseconds per call on average, definitely faster than a network roundtrip between my 3 brick machines connected via Ethernet). But
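One way to reproduce that per-call measurement on Linux, assuming strace is available (the destination directory is a placeholder):

    # -T prints time spent in each syscall, -c prints a summary
    strace -f -T -e trace=lstat rsync -a --dry-run /mnt/dir/ /tmp/empty/
    strace -f -c -e trace=lstat ls -l /mnt/dir > /dev/null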
2017 Sep 18
0
Confusing lstat() performance
I did a quick test on one of my lab clusters with no tuning except for quota being enabled: [root@dell-per730-03 ~]# gluster v info Volume Name: vmstore Type: Replicate Volume ID: 0d2e4c49-334b-47c9-8e72-86a4c040a7bd Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: 192.168.50.1:/rhgs/brick1/vmstore Brick2:
2017 Sep 17
0
Confusing lstat() performance
I found the reason now, at least for this set of lstat()s I was looking at. bup first does all getdents(), obtaining all file names in the directory, and then stat()s them. Apparently this destroys some of gluster's caching, making stat()s ~100x slower. What caching could this be, and how could I convince gluster to serve these stat()s as fast as if a getdents() had been done just before
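The getdents-then-stat pattern being described is visible directly in the syscall trace; a rough sketch, assuming Linux and the bup invocation from the thread:

    # a burst of getdents64 calls first, then a long run of lstat calls
    strace -f -e trace=getdents64,lstat bup index /mnt/dir 2>&1 | less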
2008 Mar 02
1
Wrong uptodate
Dear list, I am syncing files with rsync .... surprise surprise ... Rsync claims files to be up to date, but they are not ... From the log: export/opt/bup/status/1 is uptodate export/opt/bup/status/2 is uptodate . . Source directory (locally on server): -rw-r--r-- 1 root root 28 2008-03-01 22:44 1 -rw-r--r-- 1 root root 28 2008-03-01 22:37 2 . . Destination directory (nfs share):
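"is uptodate" only means size and mtime match; when contents can still differ (as over this NFS destination), a checksum pass forces real transfers. A sketch with a hypothetical destination path:

    # -c compares full checksums instead of the size+mtime quick check
    rsync -avc export/opt/bup/status/ /mnt/nfs/status/
    # or skip the quick check entirely
    rsync -av --ignore-times export/opt/bup/status/ /mnt/nfs/status/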
2013 Sep 19
1
Index error copying compressed message
Hi. Dovecot 2.2, with the zlib plugin, I think we're getting bad index entries on IMAP COPY. On copying a message to an empty folder, in the dovecot error log I see: Sep 19 20:34:25 imap01 dovecot: imap(grain@rp-auth-test.com): Error: Cached message size smaller than expected (615 < 971) Sep 19 20:34:25 imap01 dovecot: imap(grain@rp-auth-test.com): Error: Corrupted index cache file
2003 Mar 13
1
BUG: read: Invalid argument
I'm attempting to mirror a directory tree from debian stable (rsync version 2.5.6cvs) to a win2k box (cygwin, rsync version 2.5.5). This setup/command had previously worked for a bit (cygwin at rsync version 2.4.6 AFAICR), but yesterday it hung, so, finding some mail-list entries about that, I upgraded cygwin, and now I get this: C:\>c:\cygwin\bin\rsync.exe -vvvvvv -essh -ac --delete
2017 Sep 15
2
Confusing lstat() performance
On 15/09/17 02:45, Sam McLeod wrote: > Out of interest have you tried testing performance > with performance.stat-prefetch enabled? Not yet, because I'm still struggling to understand the current more basic setup's performance behaviour (with it being off), but it's definitely on my list and I'll report the outcome.
2005 May 23
1
ZyXEL Prestige 2000W - can't make a call?
Hi All, Today I got a couple of ZyXEL Prestige 2000W WiFi phones. I'm having a problem making SIP calls although I can receive calls just fine. When I try to make a call the phone makes some sound (like "bup bup bup bup bup bup beep beep") and then I just hear hissing background noise (not too loud - like comfort noise). I upgraded to the latest firmware on the phone - Wj.00.10
2017 Sep 15
0
Confusing lstat() performance
Hi Niklas, Out of interest have you tried testing performance with performance.stat-prefetch enabled? -- Sam McLeod @s_mcleod https://smcleod.net > On 14 Sep 2017, at 10:42 pm, Niklas Hambüchen <mail@nh2.me> wrote: > > Hi, > > I have a gluster 3.10 volume with a dir with ~1 million small files in > them, say mounted at /mnt/dir with FUSE, and I'm observing
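For reference, toggling that translator is a one-liner on any gluster 3.7+ CLI (VOLNAME is a placeholder for the volume behind /mnt/dir):

    gluster volume set VOLNAME performance.stat-prefetch on
    gluster volume get VOLNAME performance.stat-prefetch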
2020 May 04
0
Understanding VDO vs ZFS
Hi David, in my opinion, VDO isn't worth the effort. I tried VDO for the same use case: backups. My dataset is 2-3 TB and I back up daily. Even with a smaller dataset, VDO couldn't stand up to its promises. It used tons of CPU and memory, and with a lot of tuning I could get it to kind of work, but it became corrupted at the slightest problem (even a shutdown could do this, and
2003 Jan 04
4
filelist calculation algoritm
Hi all, efficiency question for VERY low bandwidth networks. Suppose I know the list of files that have changed. What is the most efficient way to make rsync sync this list? Currently I use --include-from --exclude to generate a 'filelist', but I suspect that client and/or server exchange the list of files in the module to be synced. This traffic can be avoided since the include-from
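When the changed-file list is already known, later rsyncs grew a flag for exactly this; a minimal sketch (paths are placeholders, and note that --files-from postdates the 2003-era rsync in this thread):

    # changed.txt: one path per line, relative to the source root
    rsync -av --files-from=changed.txt /src/module/ remote:/dest/module/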
2020 May 03
9
Understanding VDO vs ZFS
Folks I'm looking for a solution for backups because ZFS has failed on me too many times. In my environment, I have a large amount of data (around 2tb) that I periodically back up. I keep the last 5 "snapshots". I use rsync so that when I overwrite the oldest backup, most of the data is already there and the backup completes quickly, because only a small number of files have