search for: 35gb

Displaying 15 results from an estimated 15 matches for "35gb".

2009 Feb 23 · 1 · Help with R and MySQL
...o, This forum has been very helpful to me in the past, and I've run out of ideas on how to solve my problem. I had been using R and MySQL (and Perl) together for quite some time successfully on my Windows XP machine. However, I recently had some problems with MySQL (the ibdata file had become 35GB on my hard drive, turns out it's a known bug with InnoDB), and ultimately the way I fixed my problem with MySQL was to upgrade it. It's working fine now, I can use MySQL however I'd like. I'm sticking to MyISAM tables for now, though. However, I had set up my system so I did a li...
2006 Sep 13 · 2 · File fragmentation
Wayne, my vote is for a command-line option. I've noticed there is some penalty for very large files (35GB-50GB). The penalty is relatively small based on my 'intuitive' measurements (me watching without running a real timer). The difference is very small compared to what happens after a few weeks without the fragmentation patch. Our SAN was becoming so fragmented that we were only getting...
2012 Nov 17 · 2 · [Bug 9407] New: rsync transfers cause zero window packets
...when I discovered that fixing the driver did not stop the zero window packets from happening and upon further investigation I discovered that ALL backup sessions are seeing lots of zero window packets, then further investigation shows that backups are taking way too long; as long as ten hours for a 35GB backup. I've googled for zero window errors with rsync and nothing I've found points to anything that could resolve my issue. But how can that be? Am I the only one seeing this issue? Or is it that no one is looking? I am running CentOS 6 and Fedora 17, all of which have rsync 3.0.9-1,...
2009 Jan 22 · 3 · Failure to boot from zfs on Sun v880
... Hi. I am trying to move the root volume from an existing svm mirror to a zfs root. The machine is a Sun V880 (SPARC) running nv_96, with OBP version 4.22.34, which is AFAICT the latest. The svm mirror was constructed as follows:

/     d4   m 18GB   d14
      d14  s 35GB   c1t0d0s0
      d24  s 35GB   c1t1d0s0
swap  d3   m 16GB   d13
      d13  s 16GB   c1t0d0s3
      d13  s 16GB   c1t1d0s3
/var  d5   m 8.0GB  d15
      d15  s 16GB   c1t0d0s1
      d25  s 16GB   c1t1d0s1

I removed c1t1d0 from the mirror: # metad...
2014 Dec 26 · 3 · [LLVMdev] LTO question
....blogspot.com/2014/04/linktime-optimization-in-gcc-2-firefox.html Comparison with LLVM is described in the second article. It took about 40min to finish building Firefox with llvm using lto and -g. The following is a quote: "This graph shows issues with debug info memory use. LLVM goes up to 35GB. LLVM developers are also working on debug info merging improvements (equivalent to what GCC's type merging is) and the situation has improved in last two releases until the current shape. Older LLVM checkouts happily run out of 60GB memory & 60GB swap on my machine.". > > I'...
2016 Jan 22 · 4 · LVM mirror database to ramdisk
I'm still running CentOS 5 with Xen. We recently replaced a virtual host system board with an Intel S1400FP4, so the host went from a 4 core Xeon with 32G RAM to a 6 core Xeon with 48G RAM, max 96G. The drives are SSD. I was recently asked to move an InterBase server from Windows 7 to Windows Server. The database is 30G. I'm speculating that if I put the database on a 35G
2016 Jan 23 · 0 · LVM mirror database to ramdisk
...> Windows Server. The database is 30G. > > I'm speculating that if I put the database on a 35G virtual disk and > mirror it to a 35G RAM disk, the speed of database access might improve. If that were running under Linux rather than Windows I'd suggest just giving that extra 35GB to its kernel and letting its normal caching keep everything in RAM. Whether Windows (7 or Server) would be clever enough to do that is another question. Of course you could just let the Linux host do the caching, but that runs the risk of other VMs or host activity displacing some of that cache an...
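The kernel-caching suggestion above can be sketched as follows. This is a minimal illustration assuming a Linux host; the file path is a stand-in created for the demo, not the real InterBase database.

```shell
# Minimal sketch, assuming a Linux host. /tmp/fake-db.bin is a stand-in
# created for the demo; substitute the real database file in practice.
DB=/tmp/fake-db.bin
dd if=/dev/zero of="$DB" bs=1M count=8 2>/dev/null

# Warm the page cache: the first read comes from disk, subsequent reads
# are served from RAM by the kernel's normal caching.
dd if="$DB" of=/dev/null bs=1M 2>/dev/null
dd if="$DB" of=/dev/null bs=1M 2>/dev/null

# Page-cache usage is visible in /proc/meminfo:
grep '^Cached:' /proc/meminfo
```

With enough free RAM, a database would stay resident this way after the first full read, with no mirrored RAM disk needed.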
2004 Sep 28 · 1 · infinite loop in rsync daemon on Mac OSX
...cess dies with: rsync: connection unexpectedly closed (847940065 bytes read so far) rsync error: error in rsync protocol data stream (code 12) at io.c(342) _exit_cleanup(code=12, file=io.c, line=342): about to call exit(12) ...and it hadn't transferred a single file. There's only about 35GB of data on the macbox, and if I run a bunch of individual rsync commands to copy all macbox's directories one at a time, they all complete fine. So I run the all-at-once rsync command again, but this time I kill it after 10 minutes to check the nohup.out file (which is now about 170MB), and...
2006 Aug 19 · 1 · DO NOT REPLY [Bug 4035] New: creates huge file
...AssignedTo: wayned@samba.org ReportedBy: bob@srdpc.com QAContact: rsync-qa@samba.org I am using rsync with ssh to back up remote servers. One system I am backing up wants to create a huge file, greater than 125GB. The hard drive is only 80GB, and the backup is usually about 35GB. I am not having this problem with any other system I am backing up. My backup server uses SME 7.0 server, which is based on CentOS 4.3, and the server I am backing up uses SME 6.01, which is based on Red Hat 7.6 (I think). The system had been working properly until a couple of days ago. Need a push...
2006 Feb 09 · 0 · filesize problem...
dear list, the following problem with large files: server: samba 3.0.21b, debian linux kernel 2.4.32, filesystem ext3; client: windows xp sp2
- I placed a 35GB tarball in the directory (also tested with a smaller file of 7.2 GB)
- the filesystem shows the correct filesize.
- looking via the samba share, the filesize shows as 2.91GB???
- trying to copy a file bigger than 2.91 GB, I get a "filesize exceeded" error
================================ smb.conf (relevant p...
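One plausible reading of the 2.91GB figure above (an assumption on my part; the excerpt does not contain a diagnosis) is a 32-bit file-size truncation somewhere in the path: a size that wraps modulo 2^32 makes a ~35GB file appear only a few GB large. The 34.91 GiB value below is illustrative, chosen to match the reported 2.91GB; the tarball's exact size is not given in the thread.

```python
# Hypothetical illustration: how a size reported through a 32-bit field
# wraps modulo 2**32. The "true" size here is a guess chosen to match
# the 2.91GB the client observed; it is not taken from the thread.
true_size = int(34.91 * 2**30)      # roughly the "35GB" tarball
seen_size = true_size % 2**32       # what a 32-bit size field can hold
print(round(seen_size / 2**30, 2))  # ~2.91 (GiB)
```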
2014 Dec 15 · 4 · [LLVMdev] LTO question
On Fri, Dec 12, 2014 at 1:59 PM, Diego Novillo <dnovillo at google.com> wrote: > On 12/12/14 15:56, Adve, Vikram Sadanand wrote: >> >> I've been asked how LTO in LLVM compares to equivalent capabilities >> in GCC. How do the two compare in terms of scalability? And >> robustness for large applications? > > > Neither GCC nor LLVM can handle our
2010 Apr 22 · 1 · Odd behavior
Hi Y'all, I'm seeing some interesting behavior that I was hoping someone could shed some light on. Basically I'm trying to rsync a lot of files, in a series of about 60 rsyncs, from one server to another. There are about 160 million files. I'm running 3 rsyncs concurrently to increase the speed, and as each one finishes, another starts, until all 60 are done. The machine
2016 Oct 13 · 2 · GitHub Survey?
...e categorize how you interact with upstream. >> - I need read/write access, and I have limited disk space. >> - I need read/write access, but a 1GB clone doesn't scare me. >> - I only need read access. > > I'm not sure that's critical. My current source repo has 35GB with > just a few worktrees. > > Also, both solutions have low-disk-usage modes, and this would make no > difference on how we proceed. This is a point of contention and a concern that Chris voiced about the monorepo. It should be in the survey. > > > >> 6. How imp...
2016 Oct 13 · 11 · GitHub Survey?
> On 2016-Sep-18, at 09:51, Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Folks, > > After feedback from Chris and Mehdi, I have added one long text answer > to *each* critical questions (impact on productivity), so that people > can extend their reasoning. > > But I have not made them compulsory, so that people that don't know > much
2010 Sep 10 · 11 · Large directory performance
We have been struggling with our Lustre performance for some time now, especially with large directories. I recently did some informal benchmarking (on a live system, so I know the results are not scientifically valid) and noticed a huge drop in performance of reads (stat operations) past 20k files in a single directory. I'm using bonnie++, disabling IO testing (-s 0) and just creating, reading,