search for: vmem

Displaying 20 results from an estimated 20 matches for "vmem".

2007 Sep 05
4
a piece of code in dtrace pseudo device
Dear all: In dtrace.c, function dtrace_probe_create(), there's a piece of code: id = (dtrace_id_t)(uintptr_t)vmem_alloc(dtrace_arena, 1, VM_BESTFIT | VM_SLEEP); id is uint32_t, and I think id is used as an index into the array dtrace_probes[], but why not just use id = cur_value + 1, where cur_value is a global variable recording the latest id? Is this a trick? For what? Thanks :) Regards, TJ ...
2006 Oct 31
0
6324745 vmem memory leak in the procfs PAGEDATA subsystem.
Author: peterte Repository: /hg/zfs-crypto/gate Revision: b681de1a640aeda1c2465325a301eec62c555cef Log message: 6324745 vmem memory leak in the procfs PAGEDATA subsystem. 6329403 hrm_init() cannot ever return -1 but in various places we check for that. 6330765 procfs pagedata can panic machine. Files: update: usr/src/uts/common/vm/hat_refmod.c
2008 Nov 14
0
fork failure - vmem stats
I was wondering if a fork failure with EAGAIN will manifest as an allocation failure in vmem; if so, how can I see that, and is there a specific cache that can be tweaked? At present the only cache reporting any failure is kmem_lp 1002438656 1002438656 239 1858 thanks -- This message posted from opensolaris.org
2017 Jul 12
2
submitting R scripts with command_line_arguments to PBS HPC clusters
...... I am getting an error message. (I am not posting the error message, because the R script I wrote works fine when it is run from a regular terminal ..) Please may I ask, how do you usually submit the R scripts with command line arguments to PBS HPC schedulers ? qsub -d $PWD -l nodes=1:ppn=4 -l vmem=10gb -m bea -M tanasa at gmail.com \ -v TUMOR="tumor.bam",GERMLINE="germline.bam",CHR="chr22" \ -e script.efile.chr22 \ -o script.ofile.chr22 \ script.R Thank you very very much ! -- bogdan [[alternative HTML version deleted]]
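A minimal sketch of how the `qsub -v` variables in the command above can be consumed, assuming the usual PBS convention that `-v NAME="value"` makes each variable available as an environment variable inside the job. The variable names and values come from the post; the wrapper itself, and invoking the script via `Rscript`, are assumptions, not from the thread:

```shell
#!/bin/sh
# Sketch: inside a PBS job, variables passed via `qsub -v` are plain
# environment variables, so they can be forwarded to the R script as
# command-line arguments (read in R with commandArgs(trailingOnly=TRUE)).
# Simulated here with the values from the post instead of a real job:
TUMOR="tumor.bam"; GERMLINE="germline.bam"; CHR="chr22"
echo "Rscript script.R $TUMOR $GERMLINE $CHR"
# prints: Rscript script.R tumor.bam germline.bam chr22
```

Inside the R script, `commandArgs(trailingOnly = TRUE)` would then return the three values in order.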
2013 Jun 25
1
Re: Permission denied
...ne that now, but still have the same problem. $ ls -la total 15899728 drwxrwxrwx 2 root root 4096 Jun 25 21:28 . drwx--x--x 3 root root 4096 Jun 25 21:13 .. -rwxrwxrwx 1 root root 1474560 Jun 24 20:00 floppy_disk_image -rwxrwxrwx 1 root root 1073741824 Jun 24 20:27 WinXPPro-c21a8baf.vmem -rwxrwxrwx 1 root root 135507536 Jun 24 20:12 WinXPPro-c21a8baf.vmss -rwxrwxrwx 1 root root 8684 Jun 24 20:18 WinXPPro.nvram -rwxrwxrwx 1 root root 2145255424 Jun 24 20:12 WinXPPro-s001.vmdk -rwxrwxrwx 1 root root 2146238464 Jun 24 20:33 WinXPPro-s002.vmdk -rwxrwxrwx 1 root root 2146631680 J...
2011 May 05
3
converting save/dump output into physical memory image
...'t get saved in the "standard" input format for memory forensics tools, which is a raw physical memory image. (This is what you'd get via the classical "dd /dev/mem" approach or the contemporary equivalent using the crash driver; and VMware Server and Workstation produce .vmem files, which are such raw physical memory images, when a guest is paused or snapshotted.) In order to analyze the memory of Libvirt/KVM guests with my Linux memory forensics software, Second Look, I've created a tool for converting Libvirt-QEMU-save files (output of virsh save command) or QEMU...
2013 Jun 25
2
Permission denied
.... $ ls -la total 15899728 drwxrwxr-x 2 roland roland 4096 Jun 25 16:05 . drwxr-xr-x 92 roland roland 4096 Jun 25 18:15 .. -rw------- 1 libvirt-qemu kvm 1474560 Jun 24 20:00 floppy_disk_image -rw------x 1 libvirt-qemu kvm 1073741824 Jun 24 20:27 WinXPPro-c21a8baf.vmem -rw------x 1 libvirt-qemu kvm 135507536 Jun 24 20:12 WinXPPro-c21a8baf.vmss -rw----r-x 1 libvirt-qemu kvm 8684 Jun 24 20:18 WinXPPro.nvram -rw----r-x 1 libvirt-qemu kvm 2145255424 Jun 24 20:12 WinXPPro-s001.vmdk -rw----r-x 1 libvirt-qemu kvm 2146238464 Jun 24 20:33 WinXPPro-s...
2017 Jul 12
0
submitting R scripts with command_line_arguments to PBS HPC clusters
...e. >(I am not posting the error message, because the R script I wrote works >fine when it is run from a regular terminal ..) > >Please may I ask, how do you usually submit the R scripts with command >line >arguments to PBS HPC schedulers ? > >qsub -d $PWD -l nodes=1:ppn=4 -l vmem=10gb -m bea -M tanasa at gmail.com \ >-v TUMOR="tumor.bam",GERMLINE="germline.bam",CHR="chr22" \ >-e script.efile.chr22 \ >-o script.ofile.chr22 \ >script.R > >Thank you very very much ! > >-- bogdan > > [[alternative HTML version deleted...
2017 Jul 12
1
submitting R scripts with command_line_arguments to PBS HPC clusters
...message, because the R script I wrote works >> fine when it is run from a regular terminal ..) >> >> Please may I ask, how do you usually submit the R scripts with command >> line >> arguments to PBS HPC schedulers ? >> >> qsub -d $PWD -l nodes=1:ppn=4 -l vmem=10gb -m bea -M tanasa at gmail.com \ >> -v TUMOR="tumor.bam",GERMLINE="germline.bam",CHR="chr22" \ >> -e script.efile.chr22 \ >> -o script.ofile.chr22 \ >> script.R >> >> Thank you very very much ! >> >> -- bogdan >...
2018 Mar 21
4
rsync very very slow with multiple instances at the same time.
...hours to complete. Here are my options: /usr/local/bin/rsync3 --rsync-path=/usr/local/bin/rsync3 -aHXxvE --stats --numeric-ids --delete-excluded --delete-before --human-readable --rsh="ssh -T -c aes128-ctr -o Compression=no -x" -z --skip-compress=gz/bz2/jpg/jpeg/ogg/mp3/mp4/mov/avi/vmdk/vmem --inplace --chmod=u+w --timeout=60 --exclude='Caches' --exclude='SyncService' --exclude='.FileSync' --exclude='IMAP*' --exclude='.Trash' --exclude='Saved Application State' --exclude='Autosave Information' --exclude-from=/Users/pabittan/.UserSync/exclude-list --max-size...
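One common mitigation for the slowdown described above, offered as an assumption rather than something from the thread: instead of starting all rsync instances at once and letting them compete for disk seeks, cap the concurrency, e.g. with `xargs -P`. The hosts and paths below are placeholders, and `echo` is a dry-run stand-in for the real transfer:

```shell
# Sketch (assumption): run at most 2 rsync transfers at a time so
# several instances do not thrash the same disks; `echo` is a dry-run
# stand-in for the real rsync invocation.
printf '%s\n' host1 host2 host3 host4 |
  xargs -n1 -P2 -I{} echo rsync -aH --inplace "{}:/data/" "/backup/{}/"
```

With `-P2`, two transfers run concurrently and the next starts as soon as one finishes, which keeps total wall-clock time bounded without saturating the source disks.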
2018 Mar 23
0
rsync very very slow with multiple instances at the same time.
...> /usr/local/bin/rsync3 --rsync-path=/usr/local/bin/rsync3 -aHXxvE --stats >>> --numeric-ids --delete-excluded --delete-before --human-readable >>> --rsh="ssh -T -c aes128-ctr -o Compression=no -x" -z >>> --skip-compress=gz/bz2/jpg/jpeg/ogg/mp3/mp4/mov/avi/vmdk/vmem --inplace >>> --chmod=u+w --timeout=60 --exclude='Caches' --exclude='SyncService' >>> --exclude='.FileSync' --exclude='IMAP*' --exclude='.Trash' --exclude='Saved >>> Application State' --exclude='Autosave Information' >>> --exclude-fr...
2006 Jan 13
26
A couple of issues
I've been testing ZFS since it came out on b27 and this week I BFUed to b30. I've seen two problems, one I'll call minor and the other major. The hardware is a Dell PowerEdge 2600 with 2 3.2GHz Xeons, 2GB memory and a perc3 controller. I have created a filesystem for over 1000 users on it and take hourly snapshots, which destroy the one from 24 hours ago, except the
2018 Mar 23
1
Aw: Re: rsync very very slow with multiple instances at the same time.
An HTML attachment was scrubbed... URL: <http://lists.samba.org/pipermail/rsync/attachments/20180323/66c46d5a/attachment.html>
2016 Sep 23
2
RFC: Implement variable-sized register classes
> On Sep 23, 2016, at 1:01 PM, Sean Silva via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > > > On Tue, Sep 20, 2016 at 10:32 AM, Krzysztof Parzyszek via llvm-dev <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote: > I have posted a patch that switches the API to one that supports this (yet non-existent functionality) earlier: >
2005 May 19
1
R 2.1.0 RH Linux Built from Source Segmentation Fault
...thwestern.edu> Date: Wed, 18 May 2005 4:47:19 pm CDT Subject: PBS JOB 1534.seldon PBS Job Id: 1534.seldon Job Name: mnl300_z Execution terminated Exit_status=139 resources_used.cpupercent=98 resources_used.cput=00:25:02 resources_used.mem=233536kb resources_used.ncpus=1 resources_used.vmem=250252kb resources_used.walltime=00:25:13 ===========End of original message text=========== The second source file contains: rmnlRwMetrop1=function(Data,Prior,Mcmc,beta0) { # # purpose: # draw from posterior for MNL using Independence Metropolis # # Arguments: # Data - list of m,X,y # m...
2008 Jul 22
3
6.3-RELEASE-p3 recurring panics on multiple SM PDSMi+
We have 10 SuperMicro PDSMi+ 5015M-MTs that are panic'ing every few days. This started shortly after upgrade from 6.2-RELEASE to 6.3-RELEASE with freebsd-update. Other than switching to a debugging kernel, a little sysctl tuning, and patching with freebsd-update, they are stock. The debugging kernel was built from source that is also being patched with freebsd-update. These systems are
2006 Oct 31
0
6362982 namespace pollution/protection in libc
...libsocket/inet/ether_addr.c update: usr/src/lib/libumem/common/linktest_stand.c update: usr/src/lib/libumem/common/misc.c update: usr/src/lib/libumem/common/stub_stand.c update: usr/src/lib/libumem/common/umem.c update: usr/src/lib/libumem/common/umem_fork.c update: usr/src/lib/libumem/common/vmem.c update: usr/src/lib/libumem/common/vmem_base.c update: usr/src/lib/libumem/common/vmem_sbrk.c update: usr/src/lib/libuutil/Makefile.com update: usr/src/lib/libuutil/common/libuutil_common.h update: usr/src/lib/libxnet/Makefile.com update: usr/src/lib/nsswitch/files/Makefile.com update: usr...
2011 May 29
22
[Bug 8177] New: Problems with big sparsed files
.../attrs were not transferred (see previous errors) (code 23) at main.c(1518) [generator=3.0.8] I used this command: rsync-static -av --rsync-path=/bin/rsync-static --rsh="ssh -i /root/.ssh/${REMOTE_HOST}" --exclude="*00000*" --exclude="*.vswp" --exclude="*.vmem" --exclude="*Snapshot*" --exclude="*lck" --exclude="*.vmsd" --exclude="*.vmss" --exclude="*.vmx" --sparse --delete ${REMOTE_HOST}:/vmfs/volumes/${LUN}/${VMNAME}/ . Is this a bug? I did check that this "bug" is similar to 3925, but this...
2006 Nov 02
11
ZFS and memory usage.
ZFS works really stably on FreeBSD, but my biggest problem is how to control ZFS memory usage. I've no idea how to leash that beast. FreeBSD has a backpressure mechanism: I can register my function so it will be called when there are memory problems, which I do. I'm using it for the ARC layer. Even with this in place, under heavy load the kernel panics, because memory with KM_SLEEP
2009 Sep 09
1
oVirt Appliance / Single Machine Install
The following two patches fix / reimplement the oVirt appliance project, installing the entire oVirt stack, including all server and node components, on one machine. These patches are intended to be checked out and used to build the appliance rpm, which once installed provides the /usr/sbin/ovirt-appliance-ctrl script to install/uninstall the appliance. The first patch merely removes