similar to: 2 GB Limit when writing to smbfs filesystems

Displaying 20 results from an estimated 300 matches similar to: "2 GB Limit when writing to smbfs filesystems"

2008 Nov 18
2
anyone familiar with this error?
[whit at linuxsvr R.packages]$ sudo R CMD INSTALL portfolio.construction
* Installing to library '/usr/local/lib64/R/library'
* Installing *source* package 'portfolio.construction' ...
** R
** preparing package for lazy loading
Loading required package: fts
Loading required package: quadprog
Loading required package: Rexcelpoi
terminate called after throwing an instance of
2008 Dec 09
1
any suggestions to deal with 'Argument list too long' for an R CMD check?
Since gcc was using upwards of 2 GB of RAM to compile my package, I just split all the functions into individual files. I guess I'm too clever for myself, because now I get hit with the "Argument list too long" error. Is there a way to deal with this aside from writing my own configure script (which could possibly feed the gcc commands one by one)? -Whit RHEL 5 [whit at
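A minimal workaround sketch, assuming the failure comes from the single gcc invocation that is handed the whole file list at once (the paths and flags below are illustrative, not taken from the post):

    # compile each source on its own, so no single command line has to
    # carry every file name, then link the objects into the shared library
    cd portfolio.construction/src
    for f in *.c; do
        gcc -c -fPIC -I"$(R RHOME)/include" "$f" -o "${f%.c}.o"
    done
    gcc -shared -o portfolio.construction.so *.o

Note the final link still expands *.o, so with an extreme number of objects it would need the same one-at-a-time treatment (e.g., via an ar archive).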
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
We have been using ZFS for user home directories for a good while now. When we discovered the problem with full filesystems not allowing deletes over NFS, we became very anxious to fix this; our users fill their quotas on a fairly regular basis, so it's important that they have a simple recourse to fix this (e.g., rm). I played around with this on my OpenSolaris box at home, read around
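The workaround usually suggested for this situation is to free the file's blocks before unlinking it, since truncation allocates nothing new; a sketch, with the filename hypothetical:

    # truncating first releases the data blocks without requiring any
    # new allocation, so it succeeds even at 100% of the refquota
    cat /dev/null > ~/some-big-file    # or: truncate -s 0 ~/some-big-file
    rm ~/some-big-file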
2007 Nov 27
1
Syncing to multiple servers
Hello everyone, Let's say we have 3 servers, 2 of them have the latest (stable) version of rsyncd running (2.6.9) <Server1> ==> I N T E R N E T ==> <Server2 (rsyncd running)> ==> LAN ==> <Server3 (rsyncd running)> Suppose I want to send a big file (bigfile.big) from Server1 to both Server2 and Server3. It would be a good idea to send first from Server1
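A hedged sketch of that fan-out, assuming an rsyncd module named 'data' on both receiving hosts and shell access to Server2 (all names are made up for illustration):

    # pay the WAN cost once, then replicate across the LAN
    rsync -av bigfile.big server2::data/
    ssh server2 'rsync -av /srv/data/bigfile.big server3::data/'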
2012 Oct 20
2
can't find the error in if function... maybe i'm blind?
Hi everybody, the following always gives me the error "Error in if (File$X.Frame.Number[a] + 1 == File$X.Frame.Number[a + 1]) (File$FishNr[a] <- File$FishNr[a - : missing value where TRUE/FALSE needed". Maybe it's stupid, but I'm not getting why... Maybe someone can help me. Thanks a lot! for (i in unique(BigFile$TrackAll)) { File <-
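The usual cause of that message is the last iteration: File$X.Frame.Number[a + 1] indexes one past the end of the data frame, returns NA, and if() cannot test NA. A sketch of the guard, run via Rscript with toy data standing in for the real columns (the assignment is a placeholder, since the original is truncated):

    # stop one row early so a+1 never runs past the last row
    Rscript -e '
      File <- data.frame(X.Frame.Number = c(1, 2, 3, 7), FishNr = NA)
      for (a in seq_len(nrow(File) - 1)) {
        if (File$X.Frame.Number[a] + 1 == File$X.Frame.Number[a + 1]) {
          File$FishNr[a] <- 1   # placeholder for the real assignment
        }
      }
      print(File)
    '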
2007 Mar 02
1
--delete --force Won't Remove Directories With Dotnames
--delete --force Won't Remove Directories With Dotnames rsync 2.6.9 Me, personally, I reckon this to be an irritant ... but perhaps (and having thought about this a bit, I decided there's a good chance) this is an intentional and useful behaviour. But it's a nuisance if you call your --partial-dir .partial, as I happen to do, since now if you remove a directory which was aborted in
2009 Apr 22
2
purge-empty-dirs and max-file-size confusion
I want to use --min-size to copy just large files (and their necessary parent directories), but everything I've tried copies *all* the source directories, and creates them empty on the destination even if they don't have any big files in them. I only want the minimal directory hierarchies that contain the big files. This doesn't work:
$ rm -rf /tmp/foo
$ rsync -ai --min-size
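The option the subject line is reaching for is --prune-empty-dirs (-m); whether it actually prunes in combination with --min-size depends on the rsync version, so treat this as a sketch to try rather than a confirmed fix:

    $ rm -rf /tmp/foo
    $ rsync -ai -m --min-size=100M /src/ /tmp/foo/
    # -m / --prune-empty-dirs is meant to drop directories that end up
    # with no files in the transfer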
2005 Jul 06
2
OpenSSH and non-blocking mode
Dear OpenSSH developers, OpenSSH setting non-blocking mode on its standard files creates serious problems. Setting non-blocking mode violates many of the semantics of how files are supposed to behave and most programs (and most, if not all, stdio libraries) are not prepared to deal with it. That wouldn't be a problem except that non-blocking mode is not a property of the file descriptor but
2002 Mar 27
2
Linux 2.4.18 on RH 7.2 - odd failures
Hi there, I'm using RH7.2 (with the 2.4.9-30 kernel and its required components) as a base for a server system running kernel 2.4.18. I've gone to this version to get around non-performing aic7xxx drivers in the stock 7.2 kernels, and updated gigabit ethernet drivers. I have a raid unit (Medea) attached to an Adaptec 3916, coming up as sdb. It has 2kb blocks, but the fault
2005 Sep 23
2
17G File size limit?
Hi everyone, This is a strange problem I have been having. I'm not sure where the problem is, so I figured I'd start here. I was having problems with Bacula stopping on 17 GB volume sizes, so I decided to try to just dd a 50 GB file. Sure enough, once the file hit 17 GB, dd stopped and spit out an error
(pandora bacula)# dd if=/dev/zero of=bigfile bs=1M count=50000
File size
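When dd dies at a fixed size like that, one cheap thing to rule out before blaming the filesystem is an inherited per-process file size limit; a diagnostic sketch:

    # a ulimit delivers SIGXFSZ ("File size limit exceeded") at a hard
    # cutoff; "unlimited" rules this out
    ulimit -f    # file-size limit, reported in 1 KB blocks
    ulimit -a    # full picture of the limits the shell inherited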
2016 Oct 27
2
NFS help
On Wed, Oct 26, 2016 at 9:35 AM, Matt Garman <matthew.garman at gmail.com> wrote: > On Tue, Oct 25, 2016 at 7:22 PM, Larry Martell <larry.martell at gmail.com> wrote: >> Again, no machine on the internal network that my 2 CentOS hosts are >> on is connected to the internet. I have no way to download anything. >> There is an onerous and protracted process to get
2007 Nov 08
3
skip non-sequential lines using scan?
Hi all, Is there a way to skip non-sequential lines using the "skip" argument in the scan function? E.g., I have a matrix with 100 rows and 1e7 columns. I open a connection and want to read only lines 5, 7, 9, etc [i.e., seq(5,99,2)] It might seem that the syntax to do this would be something like this (if only the "skip" allowed vectors in the same way colClasses does in
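Since skip takes only a single number, one workaround is to leave the connection open and alternate one-line reads with one-line skips; a sketch via Rscript, assuming a numeric file hypothetically named big.txt:

    Rscript -e '
      con <- file("big.txt", "r")
      invisible(readLines(con, n = 4))              # position at line 5
      odd <- lapply(seq(5, 99, 2), function(i) {
        x <- scan(con, nlines = 1, quiet = TRUE)    # keep this line
        invisible(readLines(con, n = 1))            # discard the next
        x
      })
      close(con)
    '

Because the connection stays open, each scan() resumes where the previous read stopped, so the file is still traversed only once.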
2004 Sep 02
1
--partial-dir not behaving like it ought to
Hi, I have awaited the new release in order to use the "--partial-dir" option. But after testing, it seems that it does not behave like it says on the tin. It will correctly move and rename the interrupted file to the declared directory, but it will not attempt to use it when the client attempts to rsync the file again. I have a Solaris 8 box running as a server (Matthew), and another
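One thing worth checking, sketched below: the retry only reuses the partial file when it is given the same --partial-dir (or the RSYNC_PARTIAL_DIR environment variable) as the interrupted run. Module and path names here are hypothetical:

    export RSYNC_PARTIAL_DIR=.partial   # default --partial-dir for every run
    rsync -av bigfile Matthew::backup/
    # ...interrupted; rerunning the identical command should pick up
    # .partial/bigfile as the basis file for the resumed transfer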
2003 Mar 21
1
Download to ext3 partition stalls
Hi ext3-gurus, I was doing some testing of our 100 Mbit ethernet at work, and found a weird problem which appears to be ext3 related as far as I can tell. The test is very simple. I use ncftp to connect to another machine running vsftpd and download a very large file over the network. The network is switched, and I get 8-10 MB/s transfers usually. Now, if I download to /dev/null ("get
2004 Jul 30
1
Problem related to time-stamp
Hi, I'm facing a problem in "rsync" related to the time-stamp of the files. I'm using rsync for transferring the file from my m/c (OS: jaluna-linux) to a remote m/c (OS: jaluna-linux), and even if there was no change in the files on my m/c, when I rsync them to the remote m/c the time-stamp of the file on the remote m/c (which I transferred from my m/c) will change. My file name is bigfile and it is
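A sketch of the usual fix: rsync only preserves modification times when asked, so without -t each transfer stamps the destination with the current time and the next run sees a "changed" file. Host and path are hypothetical:

    # -a implies -t (preserve mtimes); once size and mtime match,
    # repeat runs skip the file entirely
    rsync -av bigfile remote-mc:/path/to/dest/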
2003 Dec 02
1
rdiff
Is there any chance for rdiff? I need to frequently synchronize a big text file (60 MB+) undergoing small changes, and I am interested in the differences between the subsequent versions [DNS RBL data in dnsbl format, 1E6+ lines of text, new version every 20m, on average 50 new entries (lines) in every synchronization]. I would like to get a (small) diff file as the result of an rsync session and apply it to
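librsync's rdiff command does exactly this three-step dance; a sketch with hypothetical file names:

    rdiff signature old.rbl old.sig             # receiver summarizes what it has
    rdiff delta old.sig new.rbl changes.delta   # sender emits a small delta
    rdiff patch old.rbl changes.delta new.rbl   # receiver rebuilds the new version

Only old.sig and changes.delta cross the wire, which for ~50 changed lines per cycle should be a tiny fraction of the 60 MB file.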
2016 Oct 26
3
NFS help
On Tue, Oct 25, 2016 at 12:48 PM, Matt Garman <matthew.garman at gmail.com> wrote: > On Mon, Oct 24, 2016 at 6:09 PM, Larry Martell <larry.martell at gmail.com> wrote: >> The machines are on a local network. I access them with PuTTY from a >> Windows machine, but I have to be at the site to do that. > > So that means when you are offsite there is no way to access
2015 Sep 11
2
Cannot open: No space left on device
On Fri, Sep 11, 2015 at 3:19 PM, Dario Lesca <d.lesca at solinos.it> wrote: > the result.
# du -sc /* /.??* --exclude /proc | sort -n
0       /.autofsck
0       /.autorelabel
0       /misc
0       /net
0       /sys
4       /cgroup
4       /media
4       /mnt
4       /selinux
4       /srv
8       /opt
16      /home
16      /lost+found
16      /tmp
112     /root
188     /dev
7956    /bin
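When du accounts for almost nothing yet writes fail with ENOSPC, two usual suspects are exhausted inodes and deleted-but-still-open files; a diagnostic sketch:

    df -h /      # block usage
    df -i /      # inode usage: 100% IUse% also produces "No space left on device"
    lsof +L1     # deleted files still held open keep their blocks allocated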
2009 Feb 19
4
[Bug 1558] New: Sftp client does not correctly process server response messages after write error
https://bugzilla.mindrot.org/show_bug.cgi?id=1558
Summary: Sftp client does not correctly process server response messages after write error
Product: Portable OpenSSH
Version: 4.3p2
Platform: amd64
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P2
Component: sftp
2016 Oct 26
0
NFS help
On Tue, Oct 25, 2016 at 7:22 PM, Larry Martell <larry.martell at gmail.com> wrote: > Again, no machine on the internal network that my 2 CentOS hosts are > on is connected to the internet. I have no way to download anything. > There is an onerous and protracted process to get files into the > internal network and I will see if I can get netperf in. Right, but do you have