Displaying 20 results from an estimated 800 matches similar to: "Problem related to time-stamp"
2003 Jul 17
1
2 GB Limit when writing to smbfs filesystems
I'm running RedHat 8.0 with samba-2.2.7-5.8.0 (installed from RedHat
distribution)
When I use cpio to write a backup (> 2GB) to a smbfs filesystem, I get the
error: File size limit exceeded
I get the same error when I copy (cp) a file (> 2GB) from a Linux ext3
filesystem to the smbfs filesystem.
The smbfs filesystem is mounted from a Windows 2000 Professional
workstation.
After
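The usual suspect with 2.2.x-era smbfs is large-file support not being
enabled on the mount; a minimal sketch of remounting with the lfs
option, where the share //w2kbox/backup, the user, and the mount point
/mnt/backup are all placeholders:

  # remount the Windows share with large file support (lfs) enabled
  $ umount /mnt/backup
  $ mount -t smbfs -o lfs,username=backupuser //w2kbox/backup /mnt/backup
  # a copy crossing the 2GB boundary should now succeed
  $ cp /data/bigfile /mnt/backup/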
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now.
When we discovered the problem with full filesystems not allowing
deletes over NFS, we became very anxious to fix this; our users fill
their quotas on a fairly regular basis, so it's important that they
have a simple recourse to fix this (e.g., rm). I played around with
this on my OpenSolaris box at home, read around
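The workaround commonly suggested for this situation is to release the
blocks before the unlink, since truncating in place needs no new space
even when the quota is exhausted; a minimal sketch, file name invented:

  # truncate first so the quota frees up, then remove the empty file
  $ cat /dev/null > ~/bigfile.log
  $ rm ~/bigfile.log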
2007 Nov 27
1
Syncing to multiple servers
Hello everyone,
Let's say we have 3 servers, 2 of them have the latest (stable) version
of rsyncd running (2.6.9)
<Server1> ==> I N T E R N E T ==> <Server2 (rsyncd running)> ==> LAN
==> <Server3 (rsyncd running)>
Suppose I want to send a big file (bigfile.big) from Server1 to both
Server2 and Server3. It would be a good idea to send first from Server1
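One way to realize that chain is to push the file once over the slow
link and let Server2 fan it out across the LAN; a rough sketch, where
the daemon module name "incoming" and the paths are assumptions:

  # Server1 -> Server2 over the Internet link
  $ rsync -av bigfile.big server2::incoming/
  # then Server2 -> Server3 over the LAN
  $ ssh server2 'rsync -av /srv/incoming/bigfile.big server3::incoming/'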
2009 Feb 19
4
[Bug 1558] New: Sftp client does not correctly process server response messages after write error
https://bugzilla.mindrot.org/show_bug.cgi?id=1558
Summary: Sftp client does not correctly process server response
messages after write error
Product: Portable OpenSSH
Version: 4.3p2
Platform: amd64
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P2
Component: sftp
2007 Mar 02
1
--delete --force Won't Remove Directories With Dotnames
rsync 2.6.9
Me, personally, I reckon this to be an irritant ... but perhaps (and having
thought about it a bit, I decided there's a good chance) this is an
intentional and useful behaviour. But it's a nuisance if you call
your --partial-dir .partial, as I happen to do, since now if you remove a
directory which was aborted in
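If memory serves, rsync adds an internal "protect" filter rule for the
directory named by --partial-dir, which would explain dot-named partial
dirs surviving --delete --force; a plausible reproduction, all paths
invented:

  $ mkdir -p src dst/somedir/.partial
  $ rsync -a --delete --force --partial-dir=.partial src/ dst/
  # dst/somedir/.partial may survive because the partial-dir is
  # protected, which in turn keeps dst/somedir from being removed
  $ find dst -name .partial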
2012 Oct 20
2
can't find the error in if function... maybe i'm blind?
Hi everybody,
the following always gives me the error
"Error in if (File$X.Frame.Number[a] + 1 == File$X.Frame.Number[a + 1])
(File$FishNr[a] <- File$FishNr[a - : missing value where TRUE/FALSE
needed". Maybe it's stupid, but I'm not getting why... Maybe someone can
help me. Thanks a lot!
for (i in unique(BigFile$TrackAll))
{ File <-
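The excerpt is cut off, but this R error nearly always means the if()
condition evaluated to NA, typically because the inner index a runs up
to the last row, making File$X.Frame.Number[a + 1] an out-of-range NA.
A guarded sketch, where the subsetting line and the loop body are
guesses reconstructed from the truncated message:

for (i in unique(BigFile$TrackAll)) {
  # the subsetting line is an assumption; the excerpt stops at "File <-"
  File <- BigFile[BigFile$TrackAll == i, ]
  # iterate only to the second-to-last row so [a + 1] never runs off the end
  for (a in seq_len(nrow(File) - 1)) {
    # isTRUE() turns an NA comparison into FALSE instead of an error
    if (isTRUE(File$X.Frame.Number[a] + 1 == File$X.Frame.Number[a + 1])) {
      File$FishNr[a + 1] <- File$FishNr[a]   # body guessed: carry the ID forward
    }
  }
}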
2009 Apr 22
2
purge-empty-dirs and max-file-size confusion
I want to use --min-size to copy just large files (and their necessary
parent directories), but everything I've tried copies *all* the source
directories, and creates them empty on the destination even if they
don't have any big files in them. I only want the minimal directory
hierarchies that contain the big files. This doesn't work:
$ rm -rf /tmp/foo
$ rsync -ai --min-size
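The pairing usually suggested for this (in rsyncs new enough to have
both options) is --min-size plus --prune-empty-dirs; the threshold and
paths below are invented:

  $ rsync -ai --min-size=100M --prune-empty-dirs src/ /tmp/foo/
  # if empty parents still appear, hide filter rules may be needed,
  # since --min-size is a transfer rule rather than a file-list exclusion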
2016 Oct 26
0
NFS help
On Tue, Oct 25, 2016 at 7:22 PM, Larry Martell <larry.martell at gmail.com> wrote:
> Again, no machine on the internal network that my 2 CentOS hosts are
> on is connected to the internet. I have no way to download anything.
> There is an onerous and protracted process to get files into the
> internal network and I will see if I can get netperf in.
Right, but do you have
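For what it's worth, a netperf round trip only needs the two binaries
carried in; a minimal sketch, with the server's host name assumed:

  # on the receiving host
  $ netserver
  # on the sending host: a 60-second TCP stream test
  $ netperf -H nfsserver -l 60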
2013 Dec 19
1
[Bug 10336] New: Time in file listing with --list-only option is inconsistent whether dst is given or not
https://bugzilla.samba.org/show_bug.cgi?id=10336
Summary: Time in file listing with --list-only option is
inconsistent whether dst is given or not
Product: rsync
Version: 3.0.9
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo:
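The summary presumably refers to listings like the following pair,
where the same source is listed with and without a destination (file
name invented):

  $ rsync --list-only somefile
  $ rsync --list-only somefile /tmp/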
2011 Oct 07
5
[Bug 8512] New: rsync -a slower than cp -a
https://bugzilla.samba.org/show_bug.cgi?id=8512
Summary: rsync -a slower than cp -a
Product: rsync
Version: 3.1.0
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: linux.news at bucksch.org
QAContact:
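A self-contained timing comparison in the spirit of the report, with
the test tree invented:

  $ time cp -a /src/tree /tmp/copy-cp
  $ time rsync -a /src/tree/ /tmp/copy-rsync/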
2002 Sep 27
0
Server writes...?
I have a quick question about rsync's writing of files.
I have a team of people that all use the host BigServer, which is
running rsync as a daemon, as a central place to keep all shared files
backed up. The "master copy" for any given file is considered to be the
local file that somebody has worked on -- i.e., BigServer is NOT
considered the master copy. BigServer is the backup
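For context, a daemon module on BigServer that accepts such backups
might look roughly like this in rsyncd.conf, with the module name and
path assumed:

  [shared]
      path = /srv/shared
      read only = false
      comment = team backup area; clients push with rsync -av file BigServer::shared/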
2016 Oct 27
0
NFS help
On Thu, Oct 27, 2016 at 1:03 AM, Larry Martell <larry.martell at gmail.com> wrote:
> On Wed, Oct 26, 2016 at 9:35 AM, Matt Garman <matthew.garman at gmail.com> wrote:
>> On Tue, Oct 25, 2016 at 7:22 PM, Larry Martell <larry.martell at gmail.com> wrote:
>>> Again, no machine on the internal network that my 2 CentOS hosts are
>>> on is connected to the
2005 Jul 06
2
OpenSSH and non-blocking mode
Dear OpenSSH developers,
OpenSSH setting non-blocking mode on its standard files creates serious
problems.
Setting non-blocking mode violates many of the semantics of how files
are supposed to behave and most programs (and most, if not all, stdio
libraries) are not prepared to deal with it. That wouldn't be a problem
except that non-blocking mode is not a property of the file descriptor
but
2005 Feb 16
0
mke2fs options for very large filesystems (and corruption!)
[sorry if this isn't threaded right... I just subscribed]
Theodore Ts'o wrote:
>
> There are two reasons for the reserve. One is to reserve space on the
> partition containing /var and /etc for log files, etc. The other is
> to avoid the performance degradation when the last 5-10% of the disk
> space is used. (BSD actually reserves 10% by default.) Given that
> the
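The reserve Ts'o describes is adjustable both at creation time and
afterwards; the device name below is illustrative:

  # shrink the reserved-blocks percentage to 1% on a data-only filesystem
  $ tune2fs -m 1 /dev/sdb1
  # or set it when the filesystem is made
  $ mke2fs -m 1 /dev/sdb1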
2002 Mar 27
2
Linux 2.4.18 on RH 7.2 - odd failures
Hi there,
I'm using RH7.2 (with the 2.4.9-30 kernel and its required components)
as a base for a server system running kernel 2.4.18. I've gone to this
version to get around non-performing aic7xxx drivers in the stock 7.2
kernels, and updated gigabit ethernet drivers.
I have a raid unit (Medea) attached to an Adaptec 3916, coming up as
sdb. It has 2kb blocks, but the fault
2006 Apr 28
1
*Bug*
My application uses Ferret to search through the database records on the
system.
The problem is that there is some bug in my application which I have not
been able to fix. What it actually does is return records of its own
sweet will.
I mean, it gives me results for some records and for some it doesn't.
I have done things like dropped the database and created it again,
2016 Oct 27
2
NFS help
On Wed, Oct 26, 2016 at 9:35 AM, Matt Garman <matthew.garman at gmail.com> wrote:
> On Tue, Oct 25, 2016 at 7:22 PM, Larry Martell <larry.martell at gmail.com> wrote:
>> Again, no machine on the internal network that my 2 CentOS hosts are
>> on is connected to the internet. I have no way to download anything.
>> There is an onerous and protracted process to get
2015 Sep 11
0
Cannot open: No space left on device
did you (or someone else with root access) possibly delete a very large
file in /var that may still have been in use? it's very annoying but
if you do a rm on a large file under /var that is still open by some
process for writing, it won't actually clear the space. you can
overcome that by just truncating the file instead of doing an rm (e.g.
either > /var/log/bigfile or cp
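To check whether that is what happened, lsof can show unlinked-but-open
files, and an already-deleted one can still be truncated through /proc;
the pid and fd number below are placeholders:

  # files with a link count of zero that some process still holds open
  $ lsof +L1
  # truncate the open handle found above, e.g. pid 1234, fd 5
  $ : > /proc/1234/fd/5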
2005 Sep 23
2
17G File size limit?
Hi everyone,
This is a strange problem I have been having. I'm not sure where the
problem is, so I figured I'd start here.
I was having problems with Bacula stopping on 17Gig Volume sizes, so I
decided to just try to dd a 50 gig file. Sure enough, once the file hit
17 gigs dd stopped and spit out an error
(pandora bacula)# dd if=/dev/zero of=bigfile bs=1M count=50000
File size
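Two quick checks that narrow this down, assuming it is either a process
limit or a filesystem ceiling; the device name is a placeholder:

  # per-process file size limit ("unlimited" rules this cause out)
  $ ulimit -f
  # an ext2/ext3 filesystem made with a 1k block size caps files at
  # about 16 GiB, i.e. roughly the 17 GB that dd reports
  $ tune2fs -l /dev/sdb1 | grep 'Block size'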
2004 Sep 02
1
--partial-dir not behaving like it ought to
Hi,
I have awaited the new release in order to use the "--partial-dir" option.
But after testing it seems that it does not behave like it says on the tin.
It will correctly move and rename the interrupted file to the declared
directory, but it will not
attempt to use it when the client attempts to rsync the file again.
I have a Solaris 8 box running as a server (Matthew), and another
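For reference, the intended round trip with --partial-dir looks like
this; the host and file names are invented:

  # an interrupted run leaves the partial file under .partial/
  $ rsync --partial-dir=.partial -av bigfile.big matthew:/data/
  # rerunning with the same option should find and resume from it
  $ rsync --partial-dir=.partial -av bigfile.big matthew:/data/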