search for: 800m

Displaying 20 results from an estimated 32 matches for "800m".

2020 Jul 16
0
[RFC for Linux v4 0/2] virtio_balloon: Add VIRTIO_BALLOON_F_CONT_PAGES to report continuous pages
...nfo | grep AnonHugePages: > AnonHugePages: 0 kB > // After VM start, use usemem > // (https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git) > // the punch-holes function generates 400m of fragmented pages in the guest > // kernel. > usemem --punch-holes -s -1 800m & > // This is the THP number after this command in the host. > // Some THP is used by the VM because usemem will access 800M of memory > // in the guest. > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 911360 kB > // Connect to the QEMU monitor, set up the balloon, and set...
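(A condensed, hedged sketch of the reproduction quoted above, assuming usemem is built from the vm-scalability tree linked in the post:)

    # inside the guest: fragment anonymous memory while keeping ~800M touched
    usemem --punch-holes -s -1 800m &
    # on the host: THP now backs the guest's working set
    grep AnonHugePages: /proc/meminfo    # the post reports: AnonHugePages: 911360 kB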
2020 Jul 16
0
[virtio-dev] [RFC for Linux v4 0/2] virtio_balloon: Add VIRTIO_BALLOON_F_CONT_PAGES to report continuous pages
...m host. > > >> // After VM start, use usemem > >> // (https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git) > >> // the punch-holes function generates 400m of fragmented pages in the guest > >> // kernel. > >> usemem --punch-holes -s -1 800m & > These lines are from the guest. They set up an environment that has a lot of fragmented pages. > > >> // This is the THP number after this command in the host. > >> // Some THP is used by the VM because usemem will access 800M of memory > >> // in the guest. >...
2007 Apr 12
10
How to bind the oracle 9i data file to zfs volumes
Experts, I'm installing Oracle 9i on Solaris 10 11/06 (update 3). I created some zfs volumes which will be used for Oracle data files, as: # zfs create -V 200m ora_pool/controlfile01_200m # zfs create -V 800m ora_pool/system_800m ... # ls -l /dev/zvol/rdsk/ora_pool lrwxrwxrwx 1 root root 39 Apr 11 12:23 controlfile01_200m -> ../../../../devices/pseudo/zfs@0:1c,raw lrwxrwxrwx 1 root root 39 Apr 11 13:34 system_800m -> ../../../../devices/pseudo/zfs@0:7c,raw (Pl...
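(A minimal sketch of the zvol setup above; ora_pool is the pool from the post, while the Oracle datafile clause is hypothetical and only illustrates pointing Oracle at the zvol's raw device:)

    zfs create -V 200m ora_pool/controlfile01_200m
    zfs create -V 800m ora_pool/system_800m
    ls -l /dev/zvol/rdsk/ora_pool    # raw devices Oracle can open directly
    # hypothetical: ... DATAFILE '/dev/zvol/rdsk/ora_pool/system_800m' ...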
2005 Mar 30
2
about memory
...2296 36136 -/+ buffers/cache: 41008 215720 Swap: 481908 60524 421384 and I want to cluster my data using hclust. My data has 3 variables and 10000 cases, but it fails, saying there is not enough memory for the vector size. I read the help doc and used $ R --max-vsize=800M to start R 2.1.0beta under Debian Linux, but it still cannot complete the analysis. So is my PC's memory not enough to carry out this analysis, or did I make a mistake in setting the memory? Thank you.
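(A back-of-the-envelope check, not from the thread: hclust() works on a full dissimilarity matrix, so for 10000 cases the dist() object alone eats roughly half of the 800M limit before clustering even starts:)

    n <- 10000
    n * (n - 1) / 2 * 8 / 1024^2   # ~381 MB of doubles for dist() alone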
2015 Mar 17
2
Reduce memory peak when serializing to raw vectors
Hi, I've been doing some tests using serialize() to a raw vector: df <- data.frame(runif(50e6,1,10)) ser <- serialize(df,NULL) In this example the data frame and the serialized raw vector occupy ~400MB each, for a total of ~800M. However the memory peak during serialize() is ~1.2GB: $ cat /proc/15155/status |grep Vm ... VmHWM: 1207792 kB VmRSS: 817272 kB We work with very large data frames and in many cases this is killing R with an "out of memory" error. This is the relevant code in R 3.1.3 in src/main...
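(A hedged workaround sketch, not from the post: serialize() straight to a file connection streams the output to disk instead of building the raw vector in RAM; "df.bin" is a placeholder name:)

    con <- file("df.bin", "wb")
    serialize(df, con)   # streams to disk; no intermediate raw vector
    close(con)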
2020 Mar 12
0
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
...ation pages, using virtio_balloon > will split QEMU's THP when it calls madvise(MADV_DONTNEED) to release > the balloon pages. > This is an example in a VM with 1G of memory and 1 CPU: > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 0 kB > > usemem --punch-holes -s -1 800m & > > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 976896 kB > > (qemu) device_add virtio-balloon-pci,id=balloon1 > (qemu) info balloon > balloon: actual=1024 > (qemu) balloon 624 > (qemu) info balloon > balloon: actual=624 > > cat /proc/mem...
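(The QEMU monitor sequence quoted above, condensed; the target of 624 and the actual= values are the ones reported in the post:)

    (qemu) device_add virtio-balloon-pci,id=balloon1
    (qemu) balloon 624     # shrink guest RAM from 1024M toward 624M
    (qemu) info balloon    # balloon: actual=624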
2009 Feb 17
1
Error: IMAP(user): nfs_flush_file_handle_cache_dir: rmdir(/var/mail) failed: Device busy
Hi, I am running Dovecot 1.1.10 and 1.1.11 on Solaris 10; mailboxes are in mbox format and served over Solaris 10 NFSv3, and the Dovecot cache is on local disk. Some of my users always hit the above NFS error, and their mailboxes are usually very big: over 100M, 400M, or even 800M. Grandy
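(A hedged dovecot.conf sketch for mbox over NFS with local indexes, loosely matching the setup described; these are documented Dovecot 1.1 settings, but the values are assumptions, not from the post:)

    mmap_disable = yes        # safer for mail files on NFS
    mail_nfs_storage = yes    # flush NFS caches around mail file access
    mail_nfs_index = no       # index/cache files live on local disk here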
2015 Mar 17
2
Reduce memory peak when serializing to raw vectors
...> I've been doing some tests using serialize() to a raw vector: > > > > df <- data.frame(runif(50e6,1,10)) > > ser <- serialize(df,NULL) > > > > In this example the data frame and the serialized raw vector occupy > ~400MB each, for a total of ~800M. However the memory peak during > serialize() is ~1.2GB: > > > > $ cat /proc/15155/status |grep Vm > > ... > > VmHWM: 1207792 kB > > VmRSS: 817272 kB > > > > We work with very large data frames and in many cases this is killi...
2007 Sep 30
7
Some questions about PCI-passthrough for HVM(Non-IOMMU)
...passthrough of a modern graphics card? I have tried the direct-io.hg subtree, but I just can't boot the Vista HVM domain with the nativedom=1 option. Xen boots without any problem with the nativedom=1, nativedom_mem=1024M option (here the 1024M of memory is reserved for the HVM) and the dom0_mem=800M option (I have 2G of RAM in total). But when I type xm cr vista.hvm to create the HVM domain (note that it is bootable without the nativedom=1 option), sometimes a disk read error occurs (as displayed on the HVM screen), sometimes it just appears to be deadlocked, and sometimes it says: "Error: Device...
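(A hypothetical GRUB menu.lst sketch combining the boot options named above; kernel and initrd paths are placeholders, and nativedom/nativedom_mem exist only in the direct-io.hg subtree the post mentions:)

    title Xen (direct-io.hg)
        kernel /boot/xen.gz dom0_mem=800M nativedom=1 nativedom_mem=1024M
        module /boot/vmlinuz-xen ro root=/dev/sda1
        module /boot/initrd-xen.img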
2010 Aug 18
2
Yet another memory limit problem
Dear List, I have read and read and still don't get why I am getting a memory issue. I am using a Samsung PC running Windows 7. I have set memory in the target field: "C:\Program Files (x86)\R\R-2.11.1\bin\Rgui.exe" --max-mem-size=3G But when I try a simple plot I get: > plot(mydat$Date,mydat$Time) Error: cannot allocate vector of size 831.3 Mb > memory.size() [1]
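(A minimal sketch, assuming Windows R 2.x where memory.size()/memory.limit() apply: check what ceiling R was actually granted before blaming the plot itself:)

    memory.size()               # MB currently committed by R
    memory.limit()              # current ceiling in MB; should be ~3072 here
    memory.limit(size = 3072)   # raise it at runtime if it came up lower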
2003 Aug 26
1
Long pause.
...eating CPU time on the destination for over half an hour. I don't know what it was doing before that. The stdout of the rsync reported: 4675350 files to consider (which is about right) On the source machine it's using about 470 MB of memory. On the destination machine it's using 800M of memory and growing. The rsync process on the destination machine is not doing ANY system calls. Oh! There ARE a lot of hardlinks involved. The destination machine IS swapping a bit: procs memory swap io system cpu r b w swpd free buff...
2003 Nov 20
1
Large RAM (> 4G) and rsync still dies?
...- just downloaded it a few days ago) shows us that we reach a little over 8 million files before the server starts telling us it's killing processes because it's out of RAM. On the FAQ, it says that rsync should consume approximately 100 bytes of memory per file, on average. So, 8 million x 100 = 800M of RAM. Why are we running out of RAM? Is there a way to tell the kernel not to use so much memory for cache and buffers, and to leave more free? Is the kernel not releasing the cache/buffer memory quickly enough for rsync? I don't know, otherwise I wouldn't be here asking these question...
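(A hedged workaround sketch, not from the thread: keep each rsync run's file list, and hence its ~100 bytes/file footprint, small by transferring one top-level directory at a time; paths and hostname are placeholders:)

    for d in /data/*/ ; do
        rsync -a "$d" backuphost:/backup/"$(basename "$d")"/
    done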
2020 Mar 12
2
[RFC for Linux] virtio_balloon: Add VIRTIO_BALLOON_F_THP_ORDER to handle THP split issue
...ation pages, using virtio_balloon > will split QEMU's THP when it calls madvise(MADV_DONTNEED) to release > the balloon pages. > This is an example in a VM with 1G of memory and 1 CPU: > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 0 kB > > usemem --punch-holes -s -1 800m & > > cat /proc/meminfo | grep AnonHugePages: > AnonHugePages: 976896 kB > > (qemu) device_add virtio-balloon-pci,id=balloon1 > (qemu) info balloon > balloon: actual=1024 > (qemu) balloon 624 > (qemu) info balloon > balloon: actual=624 > > cat /proc/mem...
2013 Aug 25
2
Backend for Lucene format indexes-How to get doclength
...ally, I've > downloaded a single file from wiki, which includes 1,000,000 lines; I've treated > one line as a document) from wiki. When doing single-term search, > the performance of the Lucene backend is as fast as Xapian's Chert. > Test environment, OS: Virtual machine Ubuntu, CPU: 1 core, MEM: 800M. > 242 terms, doing a single-term search per term, calculating the total time used > for these 242 searches (results fluctuate, so I give 10 results per > backend): > 1. backend Lucene > 1540ms, 1587ms, 1516ms, 1706ms, 1690ms, 1597ms, 1376ms, 1570ms, 1218ms, > 1551ms > 2. backend...
2015 Mar 17
0
Reduce memory peak when serializing to raw vectors
...com> wrote: > > Hi, > > I've been doing some tests using serialize() to a raw vector: > > df <- data.frame(runif(50e6,1,10)) > ser <- serialize(df,NULL) > > In this example the data frame and the serialized raw vector occupy ~400MB each, for a total of ~800M. However the memory peak during serialize() is ~1.2GB: > > $ cat /proc/15155/status |grep Vm > ... > VmHWM: 1207792 kB > VmRSS: 817272 kB > > We work with very large data frames and in many cases this is killing R with an "out of memory" error. > > Thi...
2011 Jan 16
0
No subject
Does using dom0_mem=800M have the same issue? > > Please provide all information: xm dmesg, the kernel log, xm info. > > How, if the server reboots when I do it? In that case can you please set up a serial console. Ian.
2003 May 23
0
_LOW_ACCURACY_ good enough?
...(For what it's worth, my current test case in tremor _LOW_ACCURACY_ runs in about 630M cycles; without _LOW_ACCURACY_ but with other PS2-specific optimizations, it runs in 710M cycles; vorbis with some minor floating-point-muladd optimizations and the longs changed to ogg_int32_t runs in about 800M cycles -- I suspect the huge trig lookups in mdct are killing me there). -Dave