search for: 2.4gb

Displaying 19 results from an estimated 19 matches for "2.4gb".

2005 Nov 30
2
Too much memory cache being used while moving large file
System: CentOS 4.2, 2.6.9-22.0.1.ELsmp, fully up-to-date. 3GB RAM, 3ware 9000S card with a RAID-5 array. I think that's about all the relevant info ... Had a file on disk (not the array) and attempted to mv it to the array. It went fine until 2.4GB was copied, then it slowed to a meg every few minutes. Free memory was ~50MB (typically 1.5-2GB) and cache was 2.5GB. Stopped the move, however cache
2003 Dec 19
3
partial transfer
I am attempting to use rsync to back up a Win98 laptop to a FreeBSD 4.8 backup server. I have experienced the same problem at roughly the same point in the process on two occasions. The laptop contains ~2.7GB of data. On the first attempt we received this error at 2.3GB and on the second at 2.4GB. rsync error: partial transfer (code 23) at main.c(575) Would love to have a full backup of the
2012 Mar 06
1
zero byte files
I am experiencing data loss on a CIFS share with Samba 3.6.3. I am running Debian Sid on x86. I mount the share with the following line in my fstab: //server/share /mnt/share cifs auto,users,rw,gid=50,dir_mode=0775,file_mode=0777,domain=DOMAIN,credentials=/root/share.credentials The user in the credential file is in the proper domain. GID 50 is "staff", which my user is a member
2007 Dec 19
4
Problem compiling R 3.6.1 on POWER 570 system
I've got a RHEL 4 box on a P570 system. My end user wants a 64-bit version of R compiled due to the large amount of memory they require (this image has 16GB allocated to it). I can compile R fine in 32-bit mode, but it can't use more than 2.4GB of RAM before it falls over and dies. Compiling in 64-bit mode for POWER systems "should" be as easy as adding a
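A quick way to confirm whether a given R binary is actually a 64-bit build (a small sketch added here, not from the thread) is to check the pointer size from inside R:

    .Machine$sizeof.pointer   # 8 on a 64-bit build, 4 on a 32-bit build
    R.version$arch            # reports the architecture the build targets

A 4-byte pointer means a 32-bit address space, which is what caps the process at roughly 2-4GB no matter how much RAM the machine has.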
2020 Sep 15
1
Btrfs RAID-10 performance
On 10.09.2020 at 17:40, John Stoffel wrote: >>> So why not run the backend storage on the Netapp, and just keep the >>> indexes and such local to the system? I've run Netapps for many years >>> and they work really well. And then you'd get automatic backups using >>> scheduled snapshots. >>> >>> Keep the index files local on
2009 Nov 10
3
Error: cannot allocate vector of size...
I'm trying to import a table into R; the file is about 700MB. Here's my first try: > DD<-read.table("01uklicsam-20070301.dat",header=TRUE) Error: cannot allocate vector of size 15.6 Mb In addition: Warning messages: 1: In scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, : Reached total allocation of 1535Mb: see help(memory.size) 2: In scan(file, what,
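A common workaround for this class of failure (a sketch only; the colClasses vector is a placeholder and would have to match the real file) is to declare the column types and a row count up front, so read.table does not over-allocate while guessing:

    # Pre-declaring column classes and nrows avoids repeated re-allocation
    # during type guessing; comment.char = "" also speeds up large reads.
    DD <- read.table("01uklicsam-20070301.dat", header = TRUE,
                     colClasses = c("character", rep("numeric", 10)),  # placeholder types
                     nrows = 500000,        # assumed upper bound on rows
                     comment.char = "")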
2009 Apr 24
0
Really slow CentOS 5 - Starting Applications
Hello; I'm having a problem with a CentOS 5 (x64) box with Xen, running 3 DomU (2 Win2k3, 1 Debian). Every DomU works perfectly, but Dom0 is extremely slow when starting any application; it doesn't matter if it's something big or a simple terminal. CPU is 98% idle, memory is just 24% used (2.4GB total left for Dom0). I think it's not an I/O issue, 'cause if I open the same application twice; the
2005 Feb 03
1
Memory Cap
Hi all, I am trying to use R for some data editing, using the "array" function to write binary data to a text file. I realise that R was not designed for this purpose, but I am no C programmer and would prefer to use R (as I know how to do it and hate C). Basically I get the following error: Error: cannot allocate vector of size 1634069 kb It seems that R is not keen on allocating
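One way around a single over-sized allocation (a sketch with invented sizes and file name, not taken from the thread) is to stream the data out in chunks with writeBin, so no one vector has to hold the whole dataset:

    con <- file("output.bin", open = "wb")
    for (i in 1:100) {
      chunk <- rnorm(1e6)             # stand-in for one block of the real data
      writeBin(chunk, con, size = 8)  # write this block as 8-byte doubles
    }
    close(con)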
2004 Dec 05
0
Network Write Error
I run a small network at home with two PCs running Win98SE, an NT4 server (actually SBS 4.5) and a Linux firewall (actually Smoothwall 1.0). In the last week I have been getting the "blue screen" on both PCs (Network write error. You have no space available on the server). The NT server has 2 HDs - the first drive (4GB) has the operating system (the C: drive has 2.5GB free) and the second drive
2004 Dec 30
1
optim/vmmin and R_alloc
I am calling 'vmmin' several times from a C function (which is called via .C). It works very well, except for memory consumption. The cause is that vmmin allocates memory via R_alloc, and this memory is not freed as vmmin exits. Instead all the allocated memory is freed on return of the .C call. In one application, I have 2000 functions of 500 variables each to minimize. In each call to
2005 Mar 03
5
how do i get rid of this blasted echo !!!
Any help on this would be great. I have 2 TDM400P's, 2 Asterisk servers (running on powerful boxes with FC1 and * v CVS 1.0.02), and 4 analogue PSTN lines from BT, and whatever I do, I cannot get rid of this damn local echo. I've tried setting echoTraining, echoCancel (in phone.conf and Zapata.conf) and echocancelwhenbridged to every possible combination, and I've even tried running the fxotune
2008 Jan 15
1
inbound Audio problems probably not NAT related?
Hello all, Was hoping to get a sanity check along with a question. Below is the output from top run with normal defaults, except to show both CPU's, on a SuSE 10.2 box with Asterisk v1.4.15. top - 10:00:58 up 3 days, 5:54, 4 users, load average: 0.15, 0.05, 0.01 Tasks: 110 total, 2 running, 108 sleeping, 0 stopped, 0 zombie Cpu0 : 0.2%us, 0.2%sy, 0.0%ni, 97.3%id, 2.2%wa, 0.1%hi, 0.0%si,
2003 Jan 17
2
User Profile Migration Solved
Migration of User Profiles Goal: Old machine with NT 4.0SP6 running as a primary domain controller (PDC) is to be replaced by a new Linux server running samba 2.2.7a. The samba version I used was a precompiled rpm package from SuSE. Environment: 14 Windows 2000 Professional (SP1-SP3) Workstations with 1-2 User accounts 3 Windows NT 4.0 WS (SP6) Workstations with 1-2 User accounts All these are
2017 Jul 17
1
readLines without skipNul=TRUE causes crash
Hi, thanks again for taking the time. Since corrupted compression prompted the segfault for me in the first place, I've just posted the text file as-is. It's a 2.4GB file, so to be avoided on a metered internet connection. I've updated the bugzilla report at https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=17311 with more relevant info. These lines of code crash both Windows R
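As the subject line suggests, the workaround in this thread is the skipNul argument itself; a minimal sketch (the file name is a placeholder):

    # Reading a file that contains embedded NUL bytes; without skipNul = TRUE
    # this is the kind of call reported to segfault on the 2.4GB file.
    x <- readLines("bigfile.txt", skipNul = TRUE)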
2020 Sep 10
2
Btrfs RAID-10 performance
On 09.09.2020 at 17:52, John Stoffel wrote: > Miloslav> There is one PCIe RAID controller in the chassis. AVAGO > Miloslav> MegaRAID SAS 9361-8i. And 16x SAS 15k drives connected to > Miloslav> it. Because the controller does not support pass-through for > Miloslav> the drives, we use 16x RAID-0 on the controller. So, we get > Miloslav> /dev/sda ... /dev/sdp (roughly) in
2017 Jul 16
0
readLines without skipNul=TRUE causes crash
I am stuck. The archive package won't compile for me on Ubuntu, and the CRANextra repo seems to be down so I cannot install packages on Windows right now. Perhaps you can zip the corrupt text file and put it online somewhere? Don't use the archive package to pack it since there seem to be issues with that tool on your machine. I would discourage you from harassing the Brazilian
2006 Jun 27
28
Supporting ~10K users on ZFS
OK, I know that there's been some discussion on this before, but I'm not sure that any specific advice came out of it. What would the advice be for supporting a largish number of users (10,000 say) on a system that supports ZFS? We currently use vxfs and assign a user quota, and backups are done via Legato Networker. From what little I currently understand, the general
2017 Jul 16
2
readLines without skipNul=TRUE causes crash
Hi, yep, there are two problems -- but I think only the segfault is within the scope of a base R issue? I need to look closer at the corrupted decompression and figure out whether I should talk to the Brazilian government agency that creates that .rar file or open an issue with the archive package maintainer. My goal in this thread is only to figure out how to replicate the goofy text file so
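One way to fabricate a small file with embedded NULs for a reproducible report (a sketch; all names and contents are invented) could be:

    con <- file("nul-test.txt", open = "wb")
    writeBin(c(charToRaw("line one\n"), as.raw(0), charToRaw("line two\n")), con)
    close(con)
    readLines("nul-test.txt")                  # warns about the embedded nul
    readLines("nul-test.txt", skipNul = TRUE)  # skips the nul and reads both lines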
2010 Apr 19
20
Lustre Client - Memory Issue
Hi Guys, My users are reporting some issues with memory on our Lustre 1.8.1 clients. It looks like when they submitted a single job at a time, the run time was about 4.5 minutes. However, when they ran multiple jobs (10 or fewer) on a client with 192GB of memory on a single node, the run time for each job exceeded 3-4X the run time for the single process. They also noticed that the swap space