Displaying 20 results from an estimated 100 matches similar to: "Rsync with NAS & Storagecraft ShadowProtect Desktop"
2008 Oct 07
1
ShadowProtect --> XEN
> We are relatively new to Opensolaris, but are trying to create a
> lightweight platform to restore Windows servers into in case of a
> disaster or server failure. We are using ShadowProtect to make backup
> images of the windows servers. Our question is: can we somehow restore
> those images (or convert them) so we could run the Windows server in a
> virtualized environment on
2005 May 18
1
large file sizes
I recently upgraded Samba 2.2.2 to 3.0.13. It works great except I noticed
my file sizes are huge. For example, nmbd is 47megs and smbd is 93megs.
The old version was a small fraction of that. Do these file sizes seem
correct?
2011 Jan 24
8
Unable to run Dungeons And Dragons Online.
Wine version: 1.3.11
Linux Distro: Ubuntu 10.04 Lucid
I am trying to run Dungeons and Dragons Online (DDO) but have been unable to.
I followed the instructions from the AppDB page (http://appdb.winehq.org/objectManager.php?sClass=version&iId=22586)
I ran the installer on a windows computer then copied the downloaded file to my computer and installed it with wine. I downloaded the
2006 Jun 26
3
no true incrementals with rsync?
for example's sake:
With traditional backup systems, you keep a base (full backup, let's say
every 30 days), then build incrementals on top of that, eg. (what has
changed since the base).
So, to restore, you copy over your base, then copy each incremental over the
base to rebuild up to the latest snapshot. (*copying new incrementals files
over older base files*)
With rsync, (using
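The layered restore described in this snippet can be sketched with plain cp in a scratch directory (all file and directory names here are hypothetical):

```shell
# Toy model of a traditional full + incremental restore.
# "full/" is the base; "inc1/" and "inc2/" hold only changed files.
work=$(mktemp -d)
mkdir -p "$work/full" "$work/inc1" "$work/inc2" "$work/restore"

echo "original a" > "$work/full/a.txt"
echo "original b" > "$work/full/b.txt"
echo "changed a"  > "$work/inc1/a.txt"   # a.txt changed after the full
echo "new c"      > "$work/inc2/c.txt"   # c.txt added later

# Restore: copy the base, then layer each incremental over it in order.
cp -r "$work/full/." "$work/restore/"
cp -r "$work/inc1/." "$work/restore/"
cp -r "$work/inc2/." "$work/restore/"

cat "$work/restore/a.txt"   # -> changed a (the incremental's version wins)
```

Each later layer simply overwrites older files of the same name, which is exactly the "copying new incremental files over older base files" step above.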
2015 Nov 09
3
Rsync and differential Backups
On 11/9/2015 9:50 AM, Gordon Messmer wrote:
>
> I don't see the distinction you're making.
an incremental backup copies everything since the last incremental;
a differential copies everything since the last full.
rsync is NOT a backup system, it's just an incremental file copy.
with the full/incremental/differential approach, a restore to a given
date would need to restore the last
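The incremental/differential distinction above can be made concrete with a toy directory layout (all names are hypothetical):

```shell
# Toy model: restoring to "day 2" under each scheme.
work=$(mktemp -d)
mkdir -p "$work/full" "$work/inc_day1" "$work/inc_day2" "$work/diff_day2"
echo base > "$work/full/base.txt"

# Incrementals hold only what changed since the PREVIOUS backup...
echo one > "$work/inc_day1/one.txt"
echo two > "$work/inc_day2/two.txt"

# ...while a differential holds everything changed since the FULL.
echo one > "$work/diff_day2/one.txt"
echo two > "$work/diff_day2/two.txt"

# Incremental restore: the full plus EVERY incremental, in order.
mkdir -p "$work/r_inc"
cp -r "$work/full/." "$work/r_inc/"
cp -r "$work/inc_day1/." "$work/r_inc/"
cp -r "$work/inc_day2/." "$work/r_inc/"

# Differential restore: the full plus only the LAST differential.
mkdir -p "$work/r_diff"
cp -r "$work/full/." "$work/r_diff/"
cp -r "$work/diff_day2/." "$work/r_diff/"
```

Both restores end up with the same three files; the differential just needs fewer pieces at restore time, at the cost of each differential growing larger than an incremental.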
2015 May 06
0
Backup PC or other solution
On 5/6/2015 1:34 PM, Valeri Galtsev wrote:
> My assistant liked backuppc. It is OK and will do a decent job for a
> really small number of machines (thinking 3-4, IMHO). I run bacula,
> which has close to a hundred clients; all is stored in files on RAID
> units, no tapes. Once you configure it, it is nice. But to make a
> configuration work for the first time is really challenging
2015 Oct 13
0
transferring large encrypted images.
Why are you encrypting the files and not the filesystem and the channel?
On Tue, Oct 13, 2015 at 6:54 PM, Xen <list at xenhideout.nl> wrote:
> Hi Folks,
>
> I was wondering if I could ask this question here.
>
> Initially when I was thinking up how to do this I was expecting block
> encryption to stay consistent from one 'encryption run' to the next, but I
>
2015 May 07
2
Backup PC or other solution
Il 07/05/2015 00:47, John R Pierce ha scritto:
> On 5/6/2015 1:34 PM, Valeri Galtsev wrote:
>> My assistant liked backuppc. It is OK and will do a decent job for a
>> really small number of machines (thinking 3-4, IMHO). I run bacula,
>> which has close to a hundred clients; all is stored in files on RAID
>> units, no tapes. Once you configure it, it is nice. But to
2015 Oct 13
2
transferring large encrypted images.
Hi Folks,
I was wondering if I could ask this question here.
Initially when I was thinking up how to do this I was expecting block
encryption to stay consistent from one 'encryption run' to the next, but I
found out later that most schemes randomize the result by injecting a
random block or seed at the beginning and basing all other encrypted data
on that.
In order to prevent
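The randomized-seed behavior described here is easy to observe with openssl enc, which prepends a fresh random salt on every run (the passphrase and filenames below are made up for the sketch):

```shell
# Encrypting the SAME plaintext twice yields DIFFERENT ciphertext,
# because openssl injects a fresh random salt (the "random block or
# seed" described above) each time it runs.
work=$(mktemp -d)
printf 'identical plaintext' > "$work/msg"

openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret -in "$work/msg" -out "$work/c1"
openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret -in "$work/msg" -out "$work/c2"

cmp -s "$work/c1" "$work/c2" && echo "same" || echo "different"   # -> different
```

This is also why tools like rsync see an encrypted image as almost entirely changed after re-encryption, even when the underlying plaintext barely changed.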
2007 Dec 31
1
Help with full and incremental dumps
I have an Overland Arcvault 12 library with a full LTO3 magazine of 400/800 GB
tapes. It is connected directly to the fileserver via a SCSI card/cable.
The two main directories I want to back up are /var/log, which is on one
filesystem, and /home, which is on another.
There are _currently_ no databases to worry about, but there may be active
users logged in and active jobs running.
2003 Mar 14
1
Updated ext3 patch set for 2.4
Hi all,
I've pushed my current set of ext3 diffs (against Marcelo's current
tree) to
http://people.redhat.com/sct/patches/ext3-2.4/dev-20030314/
This includes:
00-merged/ diffs recently merged into 2.4
10-core-fixes-other/ misc fixes/tweaks from akpm, adilger
11-core-fixes-sct/ misc fixes/tweaks from sct
20-tytso-updates/ Ted's recent updates
21-updates-sct/ recent sct diffs
2006 Jun 09
0
-b option on rsync with HFS support
OS X 10.4.6 (BSD)
I was using the 'standard' versions (i.e. non-HFS supporting) of
rsync with the backup -b option so that I could do nightly
incrementals. This seemed to be running fine. Changes to existing
backed up files got copied to the appropriate path(s) on the target.
BUDIR=myDirectoryName
time /usr/local/bin/rsync -a -b --suffix=# --backup-dir=$BUDIR /Users
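rsync's -b/--suffix combination keeps the previous version of an overwritten file under a renamed copy. GNU cp has an analogous --backup option, which gives a quick local way to see the idea (filenames here are hypothetical, and this relies on GNU coreutils):

```shell
# Sketch of the keep-the-old-copy idea behind rsync -b --suffix=#,
# shown with GNU cp's analogous --backup/--suffix options.
work=$(mktemp -d)
echo "version 1" > "$work/file.txt"
echo "version 2" > "$work/new.txt"

# Overwrite file.txt, but first rename the old copy to file.txt#
cp --backup=simple --suffix='#' "$work/new.txt" "$work/file.txt"

cat "$work/file.txt"    # -> version 2
cat "$work/file.txt#"   # -> version 1
```

With rsync's --backup-dir added, the renamed old copies land in a separate directory instead of next to the originals, which is what makes the nightly-incrementals scheme above work.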
2013 Mar 26
0
Converting 2D matrix to 3D array
Hi,
I would like to create M paths of 2 correlated Brownian motion increments
where each path is of length N. I use mvrnorm to create all the increments,
i.e.
library(MASS)  # provides mvrnorm
h <- 1.0
COV <- matrix(c(1, 0, 0, 1), nrow = 2)
dW <- h * t(mvrnorm(n = N * M, mu = c(0, 0), Sigma = COV))  # N, M set earlier
The next step is that I'd like to wrap dW (a 2D matrix of size 2x(N*M)) into a
3D array where each slice is a 2xN matrix and
2009 Sep 28
1
rsync followup - what did I run?
Many people have wondered what my rsync syntax was -
[as root]: rsync -av /path/to/source me at remote-host:/path/to/dest
I'll be adjusting it to adapt to perform incrementals, probably with --update.
So, I just need to learn why some of the dotfiles and other unknown
files (unless I ran a diff) didn't successfully copy over.
Thanks.
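The --update option mentioned above tells rsync to skip files that are already newer on the destination. GNU cp's -u flag follows the same rule, which makes for an easy local sketch (filenames are hypothetical; touch -d and cp -u are GNU-isms):

```shell
# rsync --update skips files that are newer on the receiver;
# GNU cp -u applies the same newer-wins rule.
work=$(mktemp -d)
echo "old source" > "$work/src.txt"
echo "newer dest" > "$work/dst.txt"
touch -d '2020-01-01' "$work/src.txt"   # backdate the source

cp -u "$work/src.txt" "$work/dst.txt"   # skipped: destination is newer

cat "$work/dst.txt"   # -> newer dest
```

Note that this protects newer destination files from being clobbered; it is not by itself an incremental backup, since nothing preserves the overwritten versions.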
2015 Jan 29
0
network copy performance is poor (rsync) - debugging suggestions?
On Thu, Jan 29, 2015 at 7:05 AM, Joseph L. Brunner
<joe at affirmedsystems.com> wrote:
>>
> our investigation showed that the rsync process, even with every switch we found, has to "open" each file a bit before it copies it... so rsync sucks for this kind of workload with 2 MILLION small files - it never gets going when it has to keep reading millions of small files. There's a switch
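One common workaround for trees with millions of small files is to stream a single tar archive through a pipe instead of negotiating file by file. A local sketch (directory names are hypothetical; over a network the extracting tar would typically run behind ssh):

```shell
# Copy a directory tree as one streamed tar archive rather than
# per-file transfers, avoiding the per-file setup cost described above.
work=$(mktemp -d)
mkdir -p "$work/src/sub" "$work/dst"
echo a > "$work/src/one.txt"
echo b > "$work/src/sub/two.txt"

# One continuous stream: create on the left, extract on the right.
tar -C "$work/src" -cf - . | tar -C "$work/dst" -xf -
```

For the network case, the right-hand side becomes something like `ssh host tar -C /dest -xf -`; the trade-off versus rsync is that tar always sends everything rather than just the differences.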
2015 May 08
0
Backup PC or other solution
On 5/7/2015 11:44 PM, Sorin Srbu wrote:
> May I ask what your settings are to achieve that retention rate?
there are a lot of settings... but these are probably applicable...
Main Config:
Schedule:
FullPeriod: 27.9
FullKeepCnt: 24
FullKeepCntMin: 8
FullAgeMax: 360
IncrPeriod: 0.97
IncrKeepCnt: 30
IncrKeepMin: 1
IncrAgeMax: 30
IncrLevels: 1
2015 May 11
2
Backup PC or other solution
> -----Original Message-----
> From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On
> Behalf Of John R Pierce
> Sent: den 8 maj 2015 17:12
> To: centos at centos.org
> Subject: Re: [CentOS] Backup PC or other solution
>
> On 5/7/2015 11:44 PM, Sorin Srbu wrote:
> > May I ask what your settings are to achieve that retention rate?
>
>
2015 Nov 10
0
Rsync and differential Backups
On 2015-11-09, John R Pierce <pierce at hogranch.com> wrote:
>
> XFS handles this fine. I have a backuppc storage pool with backups of
> 27 servers going back a year... now, I just have 30 days of
> incrementals, and 12 months of fulls,
I'm sure you know this already, but for those who may not, be sure to
mount your XFS filesystem with the inode64 option. Otherwise XFS
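For anyone unfamiliar with the option, inode64 goes in the mount options; a hypothetical /etc/fstab line (device and mount point are placeholders):

```
# /etc/fstab -- hypothetical XFS backup volume mounted with inode64
/dev/sdb1   /backup   xfs   defaults,inode64   0 2
```

On relatively recent kernels (3.7 and later) inode64 is the default for XFS, so this mainly matters on older systems or volumes deliberately mounted with inode32.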
2015 Nov 10
1
Rsync and differential Backups
On Mon, November 9, 2015 7:52 pm, Keith Keller wrote:
> On 2015-11-09, John R Pierce <pierce at hogranch.com> wrote:
>>
>> XFS handles this fine. I have a backuppc storage pool with backups of
>> 27 servers going back a year... now, I just have 30 days of
>> incrementals, and 12 months of fulls,
>
> I'm sure you know this already, but for those who may
2017 Sep 21
0
Migrating maildirs - Courier to Dovecot
On 22-09-2017 4:34, Stroller wrote:
[...]
>
> I think my main question is whether there's any reason I shouldn't
> just rsync the maildirs across from the old mail server to the new
> one?
>
> There aren't many clients using this server, so I don't care if
> clients have to redownload all their messages (in fact, I expect
> they'll probably end up