search for: 13gb

Displaying 20 results from an estimated 20 matches for "13gb".

2018 Sep 18
3
memory footprint of readRDS()
Dear all, I tried to read in a 3.8Gb RDS file on a computer with 16Gb available memory. To my astonishment, the memory footprint of R rises quickly to over 13Gb and the attempt ends with an error that says "cannot allocate vector of size 5.8Gb". I would expect that 3 times the memory would be enough to read in that file, but apparently I was wrong. I checked the memory.limit() and that one gave me a value of more than 13Gb. So I wondered if this...
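Not part of the thread, but a minimal sketch of how one might measure this, assuming GNU time is installed at /usr/bin/time and using a hypothetical file name big.rds: compare the serialized size on disk with the object's in-memory size and with R's peak allocation while readRDS() runs.

```sh
# big.rds is a hypothetical file name; first check the serialized size on disk
ls -lh big.rds

# Run readRDS() under GNU time (-v reports "Maximum resident set size") and let
# R report the final object's size plus its own peak-allocation counters
/usr/bin/time -v Rscript -e '
  gc(reset = TRUE)                       # reset the "max used" columns
  x <- readRDS("big.rds")                # deserialize the object
  print(object.size(x), units = "auto")  # size of the finished object in memory
  print(gc())                            # "max used" shows the peak reached during readRDS()
'
```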
2009 Nov 08
3
Some basic LVM questions
...able to add storage to the pool and all that. LVM was just kind of catching on when I moved away from Linux for a while, so it's a little odd to me. What I have currently is an older PC that I'm hoping to use as a home server / occasional 'workstation'. One 13GB main drive, and a 500GB drive for network storage. The default install in CentOS 5.4 seems to want to just lump everything together in one big volume. I was thinking perhaps it'd be better to have two volumes (or pools, like I said - still learning and not entirely confident of the lingo invo...
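A hedged sketch of the two-volume layout being described, assuming hypothetical device names (/dev/sda for the 13GB system disk, /dev/sdb for the 500GB storage disk): keep the default install where it is and give the storage drive its own volume group instead of lumping both disks into one.

```sh
pvcreate /dev/sdb                          # mark the 500GB disk as an LVM physical volume
vgcreate vg_storage /dev/sdb               # separate volume group for network storage
lvcreate -n lv_share -L 450G vg_storage    # leave headroom for snapshots or later growth
mkfs.ext3 /dev/vg_storage/lv_share
mkdir -p /srv/storage
mount /dev/vg_storage/lv_share /srv/storage

# Growing later is a two-step operation: extend the LV, then the filesystem
# lvextend -L +40G /dev/vg_storage/lv_share
# resize2fs /dev/vg_storage/lv_share
```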
2020 Nov 02
3
A strange problem with my daily backups performed via rsync
...--delete \ --exclude "lost+found" --exclude "System Volume Information" \ /mnt/wall /mnt/sony/ Whenever the line above goes into action, it recursively deletes /mnt/sony/wall and then copies /mnt/wall all over again. Any ideas why this might be happening ? Since /mnt/wall is 13GB, it costs me time and SSD-wear to have /mnt/sony/wall recreated all over again every time I do a backup. Thanks, Manish Jain
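Not from the thread, just a diagnostic sketch for a case like this: a dry run with itemized changes shows why rsync considers every file under /mnt/sony/wall changed; if the destination filesystem stores timestamps coarsely (FAT/exFAT style), --modify-window can absorb the difference.

```sh
# Show what would be transferred and the per-file reason codes, without copying anything
rsync -avi --dry-run --delete \
      --exclude "lost+found" --exclude "System Volume Information" \
      /mnt/wall /mnt/sony/

# If the itemized output shows only timestamp differences (a 't' in the change flags),
# retry with a 2-second tolerance, typical for FAT-style filesystems
rsync -av --delete --modify-window=2 \
      --exclude "lost+found" --exclude "System Volume Information" \
      /mnt/wall /mnt/sony/
```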
2018 Sep 18
0
memory footprint of readRDS()
...p TIBCO Software wdunlap tibco.com On Tue, Sep 18, 2018 at 8:28 AM, Joris Meys <jorismeys at gmail.com> wrote: > Dear all, > > I tried to read in a 3.8Gb RDS file on a computer with 16Gb available > memory. To my astonishment, the memory footprint of R rises quickly to over > 13Gb and the attempt ends with an error that says "cannot allocate vector > of size 5.8Gb". > > I would expect that 3 times the memory would be enough to read in that > file, but apparently I was wrong. I checked the memory.limit() and that one > gave me a value of more than 13G...
2004 Oct 12
2
Hangs on multiple small files
I am running rsync on win9x (PIII 900, 128MB RAM) between a mapped network drive and a local drive. For small folders on this machine rsync works great, but the folder I am synchronising with has 13GB of data and > 1,000,000 files. Out of about 20 attempts rsync has only succeeded once, all other times it just seems to hang. Not much info but any ideas? Thanks Scott
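One workaround commonly suggested at the time (rsync releases before 3.0 build the entire file list in memory before transferring, which is painful with over 1,000,000 files on 128MB of RAM) was to split the job into several smaller runs. A hypothetical sketch, with made-up source and destination paths:

```sh
# Sync one top-level directory at a time so each rsync run holds a much smaller file list
for d in /cygdrive/z/data/*/ ; do
    rsync -a "$d" "/cygdrive/d/backup/$(basename "$d")/"
done
```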
2005 Mar 09
1
AIX/Samba vs. Large Files
...en it'll report the disk is full, and delete the file. From command line smbclient, it does the same, but doesn't delete the file, and tells me "NT_STATUS_DISK_FULL". From Windows XP, it just reports that there's not enough space before the transfer starts. The disk has 13GB free. I've also tried 3.0.9 and got the same results. Any thoughts on what I could try next? -c. So shines a good deed in a weary world.
2004 Aug 08
2
System Reqirements HELP
...@lists.digium.com > > > Hi > I want to set up an Asterisk system for home use with SIP 2 ISDN. > I want to register up to 25 SIP Accounts at my Provider and I want to > use up to 10 SIP Phones at Home and one ISDN Phone. > Do you think a Celeron 466 MHz machine with 128MB RAM and 13GB of HDD > is enough? > > Moritz > _______________________________________________ > Asterisk-Users mailing list > Asterisk-Users@lists.digium.com > http://lists.digium.com/mailman/listinfo/asterisk-users > To UNSUBSCRIBE or update options visit: > http://lists.digium.co...
2020 Nov 02
0
A strange problem with my daily backups performed via rsync
...ost+found" --exclude "System Volume Information" \ > /mnt/wall /mnt/sony/ > > Whenever the line above goes into action, it recursively deletes > /mnt/sony/wall and then copies /mnt/wall all over again. > > Any ideas why this might be happening ? Since /mnt/wall is 13GB, it > costs me time and SSD-wear to have /mnt/sony/wall recreated all over > again every time I do a backup. > > > Thanks, > Manish Jain > -- ~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._., Kevin Korb Phone: (4...
2003 Aug 11
1
very big files
Greetings guys, I just mounted my w2k shares on my Linux box. I wanted to copy some huge files (63Gb, 1 file) to my Linux box, but I see only 1G of that file and it really copies only 1 GB ... do you have any tip? Regards, Tomas Charvat
2004 Dec 05
0
Network Write Error
...ux firewall (actually Smoothwall 1.0). In the last week I have been getting the "blue screen" on both PCs (Network write error. You have no space available on the server). The NT server has 2 HDs - the first drive (4GB) has the operating system (C: drive has 2.5GB free) and the second drive (13GB) has three 4GB partitions (2.4GB free, 343MB free, 245MB free), so I don't think this is the "real" problem. I cannot find any reference to this error on the MS Knowledge Base. Can anybody offer an explanation (and a remedy?) Regards, John
2002 Oct 02
2
kernel BUG at journal.c:1772!
...l 2.4.18-10 and all errata patches installed. My system has been running for over three years without any problems. All I've done to it in that time is add a mirror set of two 80GB drives, a Promise IDE controller, and upgrade Red Hat through the 7.x series. Right now I have three drives. A 13GB system drive off the motherboard, and two 80GB drives software mirrored on the Promise controller. Everything is running ext3. Recently, one of the drives in the mirror set died. I decided to replace both the drives so that I could have a mirror set with two identical drives. That was about two...
2020 Nov 02
1
A strange problem with my daily backups performed via rsync
...ude "System Volume Information" \ >> /mnt/wall /mnt/sony/ >> >> Whenever the line above goes into action, it recursively deletes >> /mnt/sony/wall and then copies /mnt/wall all over again. >> >> Any ideas why this might be happening ? Since /mnt/wall is 13GB, it >> costs me time and SSD-wear to have /mnt/sony/wall recreated all over >> again every time I do a backup. >> >> >> Thanks, >> Manish Jain >> >
2007 Jul 05
0
Samba backtrace on network copy
...end log-- Is this an issue that I'm running over the maximum number of processes allowed for this particular user? I'm not sure why it's getting a uid of -1. Last night I started a copy of the entire tree from a Linux client (Ubuntu), which completed successfully - 350,000 files totalling 13GB. Yesterday during the day, however, and again this morning, it failed. The only difference I can think of is lack of multiple users on the system. Server is Ubuntu 7.04 running Samba 3.0.24-2ubuntu1.2 with users stored in a local db, not running as PDC yet. Any ideas? Thanks! Chris
2005 Jan 03
2
Memory problem ... Again
Happy new year to all; A few days ago, I posted a similar problem. At that time, I found out that our R program had been 32-bit compiled, not 64-bit compiled. So R has been re-installed in 64-bit and I ran the same job, reading in 150 Affymetrix U133A v2 CEL files and performing dChip processing. However, the memory problem happened again. Since the amount of physical memory is 64GB, I think
2011 Sep 19
2
cli_push returned NT_STATUS_IO_TIMEOUT
...sfer succeeded or failed. The server is running samba 3.2.5. With samba-3.0.37 this worked without problem; with samba-3.5.3 the transfer regularly fails. The moment that it fails is different... normally it should transfer about 16GB of data but I have seen it fail after 2GB, 8GB and after 13GB. Looking at a tcpdump shows: - at 04:00:07: a packet is sent from the client to the server that contains data (wireshark identifies it as 'Write AndX Request') - at 04:00:07: a packet is sent from the server to the client to confirm the data (wireshark identifies it as 'Write AndX...
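For reference, a capture like the one described can be taken with something along these lines (interface and host name are hypothetical) and then opened in wireshark to inspect the Write AndX requests and responses:

```sh
# Full-size packets (-s 0) of SMB traffic to/from the server, written to a capture file
tcpdump -i eth0 -s 0 -w smb-timeout.pcap 'host fileserver and (port 445 or port 139)'
```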
2007 Sep 14
5
ZFS Space Map optimalization
I have a huge problem with space maps on thumper. Space maps take over 3GB and write operations generate massive read operations. Before every spa sync phase zfs reads space maps from disk. I decided to turn on compression for the pool (only for the pool, not filesystems) and it helps. Now space maps, intent log, spa history are compressed. Now I'm thinking about disabling checksums. All
2009 Nov 17
4
fts squat non-english search for 2 words
Hello, It looks like I encountered a bug or misconfiguration. fts_squat search for subject and body works very well for English mails. For non-English (in particular, Russian) it works only when the query consists of 1 word. Phrases - 2 or more words - always return nothing. Example: search for "planet" ("планета") returns results, search for "Earth" ("Земля") also
2011 Feb 24
0
No subject
which is a stripe of the gluster storage servers; this is the performance I get (note: use a file size > amount of RAM on client and server systems, 13GB in this case) : 4k block size : 111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds testing from 8k - 128k block size on the dd, best performance was achieve...
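The nfsSpeedTest script quoted above is site-internal, but a rough equivalent of the dd test it wraps looks like this (hypothetical mount point and file name); the file must be larger than RAM on both ends so reads cannot be served from the page cache, and MB/s converts to mbps by multiplying by 8 (142.281 MB/s is roughly 1138 mbps, matching the output above).

```sh
cd /pirstripe                       # hypothetical mount point of the striped volume

# Write test: ~13GB file in 4k blocks; conv=fsync makes dd wait until data is flushed
dd if=/dev/zero of=testfile bs=4k count=3407872 conv=fsync

# Drop the local page cache (requires root) so the read really goes over the network
sync; echo 3 > /proc/sys/vm/drop_caches

# Read test
dd if=testfile of=/dev/null bs=4k
```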
2012 Apr 24
21
no console when using xl toolstack xen 4.1.2
Hello! I was asking for help on the Freenode channel, and I was pointed here. I have a situation where, using xl, I can create a functional PV domU, with or without pv-grub, but I cannot access the console. Firing up xend and using xm works without trouble. Since xend and company are being deprecated, I would like to transition to using the xl toolstack. The system is an Arch Linux system
2010 Jan 16
95
Best 1.5TB drives for consumer RAID?
Which consumer-priced 1.5TB drives do people currently recommend? I had zero read/write/checksum errors so far in 2 years with my trusty old Western Digital WD7500AAKS drives, but now I want to upgrade to a new set of drives that are big, reliable and cheap. As of Jan 2010 it seems the price sweet spot is the 1.5TB drives. As I had a lot of success with Western Digital drives I thought I would