search for: bigfil

Displaying 20 results from an estimated 63 matches for "bigfil".

2003 Jul 17
1
2 GB Limit when writing to smbfs filesystems
...0 Professional workstation. After experiencing the problem, I installed an NFS client on the Windows 2000 Professional workstation (for testing purposes). Everything works with NFS, but I want to make it work with samba. I DO NOT get an error when:
MS copy from NTFS to samba share (i.e. copy c:\bigfile \\linuxsvr\smbshr\bigfile)
MS copy from samba share to NTFS (i.e. copy \\linuxsvr\smbshr\bigfile c:\bigfile)
Linux copy from ext3 to ext3 (i.e. cp -p /home/bigfile1 /home/bigfile2)
Linux copy from NFS to ext3 (i.e. cp -p /nfs/bigfile /home/bigfile)
Linux copy from ext3 to NFS (i.e. cp -p /home/...
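
A note on the symptom: old Linux smbfs clients default to 32-bit file offsets, which caps writes at 2 GB even when the server and disk are fine. A minimal sketch of the usual check, assuming the share and mount-point names from the post, and assuming the client's smbfs supports the lfs (large file support) mount option:

  # Remount the share with large-file support, then push past the 2 GB mark.
  umount /mnt/smbshr
  mount -t smbfs -o lfs,username=guest //linuxsvr/smbshr /mnt/smbshr
  dd if=/dev/zero of=/mnt/smbshr/bigfile bs=1M count=2100

If the dd still dies at exactly 2 GB, the limit is elsewhere (ulimit, the server side, or the protocol level).
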
2009 Jan 28
2
ZFS+NFS+refquota: full filesystems still return EDQUOT for unlink()
...and wait for 10u6, at which point we would put the value of the quota property in the refquota property, and set quota=none. We did this a week or so ago, and we're still having the problem. Here's an example: (on the client workstation)
willm1 at chasca:~$ dd if=/dev/urandom of=bigfile
dd: closing output file `bigfile': Disk quota exceeded
willm1 at chasca:~$ rm bigfile
rm: cannot remove `bigfile': Disk quota exceeded
willm1 at chasca:~$ strace rm bigfile
execve("/bin/rm", ["rm", "bigfile"], [/* 57 vars */]) = 0
(...)
access("...
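
The catch-22 here is that unlink() on a full ZFS filesystem may itself need space for new metadata. A hedged workaround sketch, which releases the file's blocks before the unlink (behaviour varies by ZFS release):

  # Free the data blocks without allocating anything new, then remove.
  cat /dev/null > bigfile
  rm bigfile
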
2012 Oct 20
2
can't find the error in if function... maybe i'm blind?
...ay gives me the error "Error in if (File$X.Frame.Number[a] + 1 == File$X.Frame.Number[a + 1]) (File$FishNr[a] <- File$FishNr[a - : missing value where TRUE/FALSE needed". Maybe it's stupid, but I'm not getting why... Maybe someone can help me. Thanks a lot!
for (i in unique(BigFile$TrackAll)) {
  File <- subset(BigFile, BigFile$TrackAll == i)
  File$FishNr[1] <- 1
  for (a in File$X.Frame.Number) {
    if (File$X.Frame.Number[a] + 1 == File$X.Frame.Number[a + 1])
      (File$FishNr[a] <- File$FishNr[a-1])
    else (if...
2004 Jul 30
1
Problem related to time-stamp
...s. I'm using rsync for transferring the file from my machine (OS: jaluna-linux) to a remote machine (OS: jaluna-linux), and even if there was no change in the files on my machine, when I rsync them to the remote machine the time-stamp of the file on the remote machine (the one I transferred from my machine) changes. My file name is bigfile and it is present on both machines (mine as well as the remote), and both are the same.
669 Jul 30 15:11 bigfile (on my machine)
669 Jul 30 15:08 bigfile (on the remote machine)
(both files have the same name and the same contents) Then I use rsync on my machine to transfer bigfile to the remote. rsync bigfile IpAddress:/home/guest...
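
This is what plain rsync with no options does: the destination file is rewritten and picks up the current time, so the timestamps never match between runs. A sketch of the likely fix, assuming a standard rsync on both machines:

  rsync -t bigfile IpAddress:/home/guest   # preserve modification times
  rsync -a bigfile IpAddress:/home/guest   # or full archive mode (-rlptgoD)

With matching times and sizes, rsync's quick check will then skip the unchanged file entirely.
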
2007 Nov 27
1
Syncing to multiple servers
Hello everyone, Let's say we have 3 servers, 2 of them running the latest (stable) version of rsyncd (2.6.9):
<Server1> ==> I N T E R N E T ==> <Server2 (rsyncd running)> ==> LAN ==> <Server3 (rsyncd running)>
Suppose I want to send a big file (bigfile.big) from Server1 to both Server2 and Server3. It would be a good idea to send it first from Server1 -> Server2 and then from Server2 -> Server3, to avoid sending the same file over the internet twice (Server2 & 3 are on a LAN and Server3 is a replica of Server2). Is there a way to send...
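
A sketch of that two-hop transfer, driven entirely from Server1 (host and path names are invented for illustration):

  # Hop 1: once over the internet.
  rsync -az bigfile.big server2:/data/
  # Hop 2: inside the LAN, from Server2 to Server3.
  ssh server2 "rsync -az /data/bigfile.big server3:/data/"
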
2003 Dec 02
1
rdiff
Is there any chance of rdiff support? I need to frequently synchronize a big text file (60MB+) that undergoes small changes, and I am interested in the differences between subsequent versions [DNS RBL data in dnsbl format, 1E6+ lines of text, a new version every 20 minutes, on average 50 new entries (lines) in every synchronization]. I would like to get a (small) diff file as the result of an rsync session and apply it to
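
The standalone rdiff tool from librsync produces exactly this kind of reusable delta. A sketch with invented file names:

  rdiff signature old/rbl.dnsbl rbl.sig          # receiver: summarize the old copy
  rdiff delta rbl.sig new/rbl.dnsbl rbl.delta    # sender: small delta vs. that signature
  rdiff patch old/rbl.dnsbl rbl.delta rbl.new    # receiver: apply the delta
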
2007 Mar 02
1
--delete --force Won't Remove Directories With Dotnames
...as the same name as before. Rsync just sees right through .dirs, as if they weren't part of the source and therefore don't need deleting at dest (even if they clearly are under a directory which is). Example: rsync --partial-dir=.partial /foo /bar. Abort the transfer, and /foo/biguns/bigfile gets left in /bar/foo/biguns/.partial/bigfile. Now remove /foo/biguns and do the transfer again, to completion. The /bar/foo/biguns/.partial/bigfile is still there. What do we think? Cheers, Sabahattin -- Sabahattin Gucukoglu <mail<at>sabahattin<dash>gucukoglu<dot>com&...
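
Until rsync's deletion logic reaches inside dot-directories, one workaround is a manual sweep of orphaned partial directories on the destination. A sketch, assuming the /bar destination from the example:

  # Remove any leftover .partial directories and their contents.
  find /bar -type d -name '.partial' -prune -exec rm -rf {} +
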
2009 Apr 22
2
purge-empty-dirs and max-file-size confusion
...the big files. This doesn't work:
$ rm -rf /tmp/foo
$ rsync -ai --min-size 10M --prune-empty-dirs /home/idallen/test /tmp/foo
cd+++++++++ test/
cd+++++++++ test/dir1/
cd+++++++++ test/dir2/
cd+++++++++ test/dir3/
cd+++++++++ test/dir4/
>f+++++++++ test/dir4/BIGFILE
cd+++++++++ test/dir5/
>f+++++++++ test/dir5/BIGFILE
cd+++++++++ test/dir6/
>f+++++++++ test/dir6/BIGFILE
Wrong. I don't want all those dir1, dir2, dir3 empty directories. I don't want *any* empty directories, at any level. What am I missing? -- | Ian! D. Allen -...
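
One reading of this (hedged, from the option semantics): --min-size limits what gets transferred but is not a filter, so the small files still exist as far as --prune-empty-dirs is concerned and their directories are never pruned. A workaround sketch that builds the file list externally instead:

  # Only the big files, and only the directories needed to hold them.
  cd /home/idallen
  find test -type f -size +10M | rsync -ai --files-from=- . /tmp/foo
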
2005 Jul 06
2
OpenSSH and non-blocking mode
...just the one used to set non-blocking mode. Thus, you really shouldn't use non-blocking mode unless you're in complete control of the underlying file. Since the standard files (stdin, stdout, and stderr) are inherited from the parent process, that's never the case. Consider:
$ cat bigfile | wc -c
10858893
$ (ssh localhost sleep 120& sleep 3; cat bigfile) | wc -c
cat: stdout: Resource temporarily unavailable
270336
When ssh puts its stdout into non-blocking mode, it also puts cat's stdout into non-blocking mode. cat notices the problem, but doesn't actually hand...
2016 Oct 26
3
NFS help
On Tue, Oct 25, 2016 at 12:48 PM, Matt Garman <matthew.garman at gmail.com> wrote: > On Mon, Oct 24, 2016 at 6:09 PM, Larry Martell <larry.martell at gmail.com> wrote: >> The machines are on a local network. I access them with putty from a >> windows machine, but I have to be at the site to do that. > > So that means when you are offsite there is no way to access
2002 Mar 27
2
Linux 2.4.18 on RH 7.2 - odd failures
...locks, but the fault I'm about to talk about is evident on other block sizes and controllers as well. I have a little script that makes a bunch of large files to give the filesystem a beating. It goes like this:
#!/bin/bash
i=1000
for ((i=0; i != 301; i++)); do
  time -p dd if=/dev/zero of=./bigfile.$i bs=1024k count=1024
  echo $i
done
About 4 files in, it dies with 'dd: writing `./bigfile.4': Read-only file system'. In the messages file, I see:
Mar 27 17:28:15 r5 kernel: journal_bmap: journal block not found at offset 1607 on sd(8,17)
Mar 27 17:28:15 r5 kernel: Aborting jou...
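
For completeness, a recovery sketch for the state this leaves behind: after a journal abort ext3 remounts read-only, and sd(8,17) is major 8, minor 17, i.e. /dev/sdb1 (the mount point below is an assumption):

  umount /dev/sdb1
  e2fsck -fy /dev/sdb1     # replay the journal and repair the filesystem
  mount /dev/sdb1 /mnt/test

This only restores the filesystem; the journal_bmap errors themselves still point at the underlying device or driver and need explaining separately.
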
2009 Feb 19
4
[Bug 1558] New: Sftp client does not correctly process server response messages after write error
...ons to sftp-server. One of these is a maximum file size - if a write attempts to write past the maximum file size, SSH2_FX_PERMISSION_DENIED is returned. When the client receives this it displays "permission denied" on screen and stops the upload. It then reports an ID mismatch and exits.
sftp> put bigfile
Uploading bigfile to /bigfile
bigfile                                 99%  199MB   6.7MB/s   00:00 ETA
Couldn't write to remote file "/bigfile": Permission denied
ID mi...
2015 Sep 11
2
Cannot open: No space left on device
On Fri, Sep 11, 2015 at 3:19 PM, Dario Lesca <d.lesca at solinos.it> wrote: > the result.
# du -sc /* /.??* --exclude /proc | sort -n
0       /.autofsck
0       /.autorelabel
0       /misc
0       /net
0       /sys
4       /cgroup
4       /media
4       /mnt
4       /selinux
4       /srv
8       /opt
16      /home
16      /lost+found
16      /tmp
112     /root
188     /dev
7956    /bin
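
"No space left on device" with apparently little data used is often inode exhaustion rather than full blocks. A quick check worth running alongside the du:

  df -h /    # block usage
  df -i /    # inode usage; 100% IUse% explains ENOSPC despite free blocks
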
2002 Aug 01
0
W2k no longer has Trust to samba pdc
...amily
read only = No
inherit permissions = Yes
guest ok = Yes
max connections = 20
[files]
comment = ALL File Storage HERE!
path = /files
admin users = mbruntel cbruntel moogirl zbruntel
force group = family
read only = No
inherit permissions = Yes
guest ok = Yes
max connections = 20
[bigfiles]
comment = Big&Files Directory
path = /files
admin users = mbruntel cbruntel zbruntel moogirl
force group = family
read only = No
inherit permissions = Yes
guest ok = Yes
max connections = 20
[cdrom]
comment = LINUX CDROM! R-0
path = /cd
guest ok = Yes
max connections = 1
fake o...
2005 Sep 23
2
17G File size limit?
...I'm not sure where the problem is, so I figured I'd start here. I was having problems with Bacula stopping on 17Gig Volume sizes, so I decided to just try to dd a 50 gig file. Sure enough, once the file hit 17 gigs dd stopped and spat out an error:
(pandora bacula)# dd if=/dev/zero of=bigfile bs=1M count=50000
File size limit exceeded
(pandora bacula)#
(pandora bacula)# ll
total 20334813
-rw-r--r--  1 root root 17247252480 Sep 23 00:44 bigfile
-rw-r-----  1 root root   302323821 Sep 23 01:10 Default-0001
-rw-r-----  1 root root   156637059 Sep 18 01:08 Diff-wi0001
-rw-r----- 1 root...
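
"File size limit exceeded" is the shell reporting SIGXFSZ, i.e. the process hit RLIMIT_FSIZE; that points at a per-process limit rather than at ext3 or Bacula. A sketch of the check:

  ulimit -f              # per-process file-size limit for this shell
  ulimit -f unlimited    # lift it, then re-run the dd test
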
2016 Oct 26
0
NFS help
...erf. OK, I just thought of another "poor man's" way to at least do some sanity testing between C6 and C7: scp. First generate a huge file. A general rule of thumb is at least 2x the amount of RAM in the C7 host. You could create a tarball of /usr, for example (e.g. "tar czvf /tmp/bigfile.tar.gz /usr", assuming your /tmp partition is big enough to hold this). Then, first do this: "time scp /tmp/bigfile.tar.gz localhost:/tmp/bigfile_copy.tar.gz". This will literally make a copy of that big file, but will route through most of the network stack. Make a note of how...
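
The message is cut off here, but the natural follow-up to the loopback copy is the same transfer across the wire, so the two timings separate host overhead from the network (remote host name is a placeholder):

  time scp /tmp/bigfile.tar.gz localhost:/tmp/bigfile_copy.tar.gz
  time scp /tmp/bigfile.tar.gz otherhost:/tmp/bigfile_copy.tar.gz
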
2013 Dec 19
1
[Bug 10336] New: Time in file listing with --list-only option is inconsistent whether dst is given or not
...rrent local timezone, except when using the rsync:// protocol, in which case the time is listed in UTC.
## The current timezone is CET, +1 hour
# date +%Z
CET
# rsync -a --list-only /root/local/ localhost::remote
drwxr-xr-x        4096 2013/01/25 14:06:05 .
-rw-r--r--    20000000 2013/01/25 14:06:05 bigfile
# rsync -a --list-only /root/local/
drwxr-xr-x        4096 2013/01/25 15:06:05 .
-rw-r--r--    20000000 2013/01/25 15:06:05 bigfile
#3 rsync -a --list-only /root/local/ localhost:/root/remote
root at localhost's password:
drwxr-xr-x        4096 2013/01/25 15:06:05 .
-rw-r--r--    20000000 2...
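
While the inconsistency stands, pinning the client's timezone at least makes the listings comparable; whether this fully lines up with the daemon's UTC output is an untested assumption here:

  TZ=UTC rsync -a --list-only /root/local/
  TZ=UTC rsync -a --list-only /root/local/ localhost::remote
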
2011 Oct 07
5
[Bug 8512] New: rsync -a slower than cp -a
...Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: linux.news at bucksch.org
QAContact: rsync-qa at samba.org
Reproduction:
1. du /foo/bigfile
2. echo 3 > /proc/sys/vm/drop_caches
3. time rsync -avp /foo/bigfile /bar/bigfile
4. echo 3 > /proc/sys/vm/drop_caches
5. time cp -a /foo/bigfile /bar/bigfile
Actual result:
1. ~1286 MB
3. 27.9s, 45.9 MB/s per calc, 45.61 MB/s according to rsync
5. 14.6s, 88.1 MB/s per calc
In other words,...
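
The repro steps condense into a small script for repeated runs; /foo and /bar are the placeholders from the report:

  #!/bin/bash
  # Compare rsync vs. cp on a cold cache; run as root for drop_caches.
  src=/foo/bigfile
  dst=/bar/bigfile
  sync; echo 3 > /proc/sys/vm/drop_caches
  time rsync -avp "$src" "$dst"
  rm -f "$dst"
  sync; echo 3 > /proc/sys/vm/drop_caches
  time cp -a "$src" "$dst"
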
2002 Sep 27
0
Server writes...?
...rite might be big, as in several dozen or hundreds of megs. Now to my question: What happens if three team members all try to write the same huge file to BigServer at "the same time"? Meaning, the rsync daemon on BigServer gets three connections that start uploading "/shared_space/bigfile.mov", before any one of the connections has finished uploading its complete copy? Is there any chance that the resulting "/shared_space/bigfile.mov" on BigServer would have a corrupted copy, because several clients were uploading at the same time? Or, does the rsync daemon guaran...
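
As far as corruption goes, rsync normally builds the incoming file under a temporary name and renames it into place only when complete, so readers should never see a half-written bigfile.mov; the last upload to finish wins. If the team needs the writes serialized as well, a client-side lock is a simple sketch (lock path and module name invented):

  flock /tmp/bigfile.lock rsync -az bigfile.mov rsync://BigServer/shared_space/
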
2016 Oct 27
2
NFS help
...ught of another "poor man's" way to at least do some
> sanity testing between C6 and C7: scp. First generate a huge file.
> General rule of thumb is at least 2x the amount of RAM in the C7 host.
> You could create a tarball of /usr, for example (e.g. "tar czvf
> /tmp/bigfile.tar.gz /usr" assuming your /tmp partition is big enough
> to hold this). Then, first do this: "time scp /tmp/bigfile.tar.gz
> localhost:/tmp/bigfile_copy.tar.gz". This will literally make a copy
> of that big file, but will route through most of the network stack.
>...