search for: 950mb

Displaying 7 results from an estimated 7 matches for "950mb".

2002 Aug 20
1
R doesn't use all available memory
Hi, I'm running R 1.5.1 under Solaris 5.8 (i86) with 2GB of physical RAM. R can't run memory-intensive jobs even though the system reports plenty of free memory: R refuses to take more than about 800Mb even though there is apparently about 950Mb still free - not to mention swap space. I've tried changing mem.limits() etc. with no effect. The result is an error of the ilk "Error: cannot allocate vector of size 105183 Kb" (or 7000kb, or 200kb, or whatever, depending on what I am attempting to do, once I hit that limit). Thi...
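For context, the limits the poster refers to are the ones mem.limits() controlled in that era of R. A minimal sketch, assuming an R 1.x-style interpreter where mem.limits() accepts nsize/vsize arguments; the values are purely illustrative, and the units and availability of these calls depend on the R version:

    mem.limits()                        # report current limits on cons cells (nsize) and the vector heap (vsize); NA = unlimited
    mem.limits(vsize = 1536 * 1024^2)   # request a larger vector-heap ceiling (here ~1.5GB, assuming vsize is given in bytes)
    gc()                                # show how many Ncells/Vcells are actually in use after garbage collection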
2017 Oct 27
5
Poor gluster performance on large files.
...s similar): Fuse mount: 1000MB/s write 325MB/s read Distributed only servers 1+2: Fuse mount on server 1: 900MB/s write iozone 4 streams 320MB/s read iozone 4 streams single stream read 91MB/s @64K, 141MB/s @1M simultaneous iozone 4 stream 5G files Server 1: 1200MB/s write, 200MB/s read Server 2: 950MB/s write, 310MB/s read I did some earlier single brick tests with samba VFS and 3 workstations and got up to 750MB/s write and 800MB/s read aggregate but that's still not good. These are the only volume settings tweaks I have made (after much single box testing to find what actually made a d...
2017 Oct 30
0
Poor gluster performance on large files.
...B/s read > > Distributed only servers 1+2: > Fuse mount on server 1: > 900MB/s write iozone 4 streams > 320MB/s read iozone 4 streams > single stream read 91MB/s @64K, 141MB/s @1M > simultaneous iozone 4 stream 5G files > Server 1: 1200MB/s write, 200MB/s read > Server 2: 950MB/s write, 310MB/s read > > I did some earlier single brick tests with samba VFS and 3 workstations > and got up to 750MB/s write and 800MB/s read aggregate but that's still not > good. > > These are the only volume settings tweaks I have made (after much single > box testing...
2017 Oct 27
0
Poor gluster performance on large files.
...s read > > Distributed only servers 1+2: > Fuse mount on server 1: > 900MB/s write iozone 4 streams > 320MB/s read iozone 4 streams > single stream read 91MB/s @64K, 141MB/s @1M > simultaneous iozone 4 stream 5G files > Server 1: 1200MB/s write, 200MB/s read > Server 2: 950MB/s write, 310MB/s read > > I did some earlier single brick tests with samba VFS and 3 workstations and got up to 750MB/s write and 800MB/s read aggregate but that's still not good. > > These are the only volume settings tweaks I have made (after much single box testing to find wh...
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB of 2.8TB (3 stripes of 950MB or so, each of which is a RAID5 volume on the adaptec card). We have snapshots every 4 hours for the first few days. If you add up the snapshot references, it appears somewhat high versus daily use (mostly mail boxes, spam, etc. changing), but say an aggregate of no more than 400+MB a day. However,...
2010 May 05
0
[LLVMdev] Another bad binutils?
...om> > To: Samuel Crow <samuraileumas at yahoo.com> > Sent: Wed, May 5, 2010 3:36:34 PM > Subject: Re: [LLVMdev] Another bad binutils? > > You will need a 1GB guest. Linking is the biggest memory hog of the entire build > process. I've measured it to require 850MB to 950MB for linking clang on various > 32 and 64 bit linux distros. If you try to link on a 512MB vbox, it will take > an order of magnitude (or more) longer to link due to swapping than a 1GB guest. > --mike-m On 2010-05-05, at 4:34 PM, Samuel Crow > wrote: > 384 MBytes RAM > >...
2011 Mar 29
0
Poor IO performance in hosts and VMs - XCP 1.0 b42052
...tu install, I am able to saturate the network both over NFS and iSCSI before the storage starts struggling. Furthermore, the raw network has been tested to/from Ubuntu installs, XCP hosts and VMs - I do lose the expected 3% of pure network speed as soon as a VM is involved, but apart from that, I get 930-950Mb/s, which I am happy with. However, as soon as I start testing from an XCP host, the iSCSI performance drops from the 50-55 MB/s in Ubuntu to 25-30 in XCP. What really bugs me is the VM performance - on a good day, I get 10MB/s writes to the iSCSI root VHDs, with short peaks of 20-25. The performa...