search for: unallocation

Displaying 20 results from an estimated 262 matches for "unallocation".

2016 Mar 16
2
[PATCH 0/2] blkls API to extract unallocated blocks
The blkls API downloads to the host a range of unallocated blocks from the virtual disk image. This allows recovering deleted data on filesystems where icat fails. Example:

guestfish --ro -a /home/noxdafox/ubuntu.qcow2
><fs> run
><fs> mount /dev/sda1 /
><fs> write /test.txt "$foo$bar$"
><fs> rm /test.txt
><fs> umount /
><fs> blkls
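The excerpt cuts off at the blkls call itself. Based on the description above, a plausible continuation would dump the unallocated range to a host file and search it there; the argument order and file names below are hypothetical, not taken from the patch:

><fs> blkls /dev/sda1 /tmp/unalloc.bin   # hypothetical: device, host destination file
><fs> ! grep -a 'foo' /tmp/unalloc.bin   # search the dump on the host side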
2004 Apr 21
0
Problem with Operator Unallocated number message
We have set up an Asterisk PBX managing a Euro PRI in Italy. We have connected some Cisco IP phones and a Panasonic PBX with 10 analog phones to the Asterisk PBX. If we dial an unassigned telephone number, we cannot hear the PSTN operator message saying that the subscriber does not exist, on either the IP phones or the analog ones. Asterisk simply hangs up. We have repeated the test using a
2011 Apr 01
0
[LLVMdev] Unallocated address error triggered from ::RALinScan::assignRegOrStackSlotAtInterval on i386
Hi Yuri,

> I am debugging the memory issue that manifests itself like this:
>
> *** glibc detected *** ../app/app.OWS: free(): invalid pointer: 0x0ad391fc ***

try running under valgrind. Note that if the program being JIT'd corrupts memory then this can cause the JIT itself to blow up. Ciao, Duncan.
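A minimal way to follow that suggestion, assuming a reasonably recent valgrind (the extra flag is standard but optional):

valgrind --track-origins=yes ../app/app.OWS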
2012 Oct 23
4
[LLVMdev] ABI: how to let the backend know that an aggregate should be allocated on stack
Hi All, I am trying to handle the Homogeneous Aggregate for ARM-VFP according to the spec:

C.1.vfp  If the argument is a VFP CPRC and there are sufficient consecutive VFP registers of the appropriate type unallocated then the argument is allocated to the lowest-numbered sequence of such registers.

C.2.vfp  If the argument is a VFP CPRC then any VFP registers that are unallocated are marked as
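As a concrete illustration (not from the thread), a C aggregate that rule C.1.vfp would place in consecutive VFP registers:

/* Four members of the same floating-point base type form a VFP
   homogeneous aggregate (a CPRC). Under C.1.vfp this is passed in
   s0-s3 when four consecutive single-precision registers are free;
   otherwise the remaining VFP registers are marked unavailable
   (C.2.vfp) and the aggregate is passed on the stack. */
struct Quad { float x, y, z, w; };
void takes_quad(struct Quad q);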
2011 Mar 31
2
[LLVMdev] Unallocated address error triggered from ::RALinScan::assignRegOrStackSlotAtInterval on i386
I am debugging the memory issue that manifests itself like this:

*** glibc detected *** ../app/app.OWS: free(): invalid pointer: 0x0ad391fc ***
======= Backtrace: =========
/lib/libc.so.6(+0x6c501)[0x4f6501]
/lib/libc.so.6(+0x6dd70)[0x4f7d70]
/lib/libc.so.6(cfree+0x6d)[0x4fae5d]
../app/app.OWS(_ZNSt8_Rb_treeIjjSt9_IdentityIjESt4lessIjESaIjEE5eraseESt17_Rb_tree_iteratorIjES7_+0x4b)[0x83de6eb]
2012 Sep 26
1
[LLVMdev] Modifying address-sanitizer to prevent threads from sharing memory
Hi llvm-dev! I'm writing my master's thesis on sandboxing/isolation of plugins running in a multithreaded environment. These plugins run in a real-time environment where the cost of IPC/context switching, and being at the scheduler's mercy, is unacceptable. There can be many plugin instances running, and all have to perform some computations and return the result to the main thread
2010 Nov 12
3
[Xen-API] problem with snapshot unallocation
Hi all, I use XCP 0.5 with an NFS storage repository, and I normally take daily snapshots of all the VMs I have, as follows:
1) create snapshot
2) export snapshot to file
3) delete snapshot
One day I got the VM error "Snapshot chain is too long". I noticed, however, that this appears only on VMs with more than 1 attached virtual disk. In my case I have a couple
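A sketch of those three steps with the xe CLI; the commands are the standard XCP ones, but the VM name, label, and export path are placeholders:

snap=$(xe vm-snapshot vm=myvm new-name-label=myvm-daily)
xe snapshot-export-to-template snapshot-uuid=$snap filename=/backup/myvm-daily.xva
xe snapshot-uninstall snapshot-uuid=$snap force=true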
2011 Feb 28
5
Failover Routing
Hi, I am doing failover routing based on 2 Dial commands. The first route sends back a 4xx response, and I don't want it to try the 2nd route when the response is a 4xx. Can we do failover routing based on SIP 5xx responses only? Thanks Deepika
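One common pattern is to branch on the result of the first Dial instead of chaining Dials unconditionally. A hedged dialplan sketch, with peer names as placeholders; note that exactly which SIP response classes map to which DIALSTATUS values depends on chan_sip's cause mapping, so this is an approximation:

exten => _X.,1,Dial(SIP/primary/${EXTEN})
exten => _X.,n,GotoIf($["${DIALSTATUS}" = "CONGESTION"]?failover)
exten => _X.,n,Hangup()
exten => _X.,n(failover),Dial(SIP/backup/${EXTEN})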
2013 Dec 15
9
btrfs balance on single device
Hey all, I just did a btrfs balance on a single device. Before the balance operation, here is the df result:

inglor@tiamat ~$ btrfs fi df /home
Data: total=19.19GB, used=9.34GB
System, DUP: total=32.00MB, used=4.00KB
Metadata, DUP: total=896.00MB, used=227.98MB

Then I issued a balance operation, relocating the chunks across a single device:

inglor@tiamat ~$ sudo btrfs fi balance /home
[sudo]
2012 Sep 26
5
sparse to no sparse
Hi. I have an old Xen paravirt VM which I created using a sparse file. Is there any way to convert this VM image to non-sparse without shutting down the VM? Thanks Paras.
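For the offline case (not the live conversion the poster is asking about), a fully allocated copy can be made by disabling sparse handling on the copy; file names are placeholders:

cp --sparse=never vm-sparse.img vm-full.img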
2020 Oct 16
3
[libnbd PATCH] info: Add support for new 'qemu-nbd -A' qemu:allocation-depth
A rather trivial decoding; we may enhance it further if qemu extends things to give an integer depth alongside its tri-state encoding.
---
I'll wait to push this to libnbd until the counterpart qemu patches land upstream, although it looks like I've got positive review.

 info/nbdinfo.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/info/nbdinfo.c b/info/nbdinfo.c
index
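A minimal sketch of exercising the new decoding, assuming a qemu-nbd new enough to support -A and passing the metacontext name to nbdinfo's --map option (the image path is a placeholder):

qemu-nbd -A -r -t -f qcow2 image.qcow2 &
nbdinfo --map=qemu:allocation-depth nbd://localhost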
2015 Nov 12
4
Fwd: asan for allocas on powerpc64
(Resending with the correct mailing list address.) Hi, Currently test/asan/TestCases/alloca_vla_interact.cc is XFAILed for powerpc64. I've had a look at why it doesn't work. I think the only problem is in the call to __asan_allocas_unpoison that is inserted at the end of the "for" loop (just before a stackrestore instruction). The call is created something like this
2012 Oct 23
0
[LLVMdev] ABI: how to let the backend know that an aggregate should be allocated on stack
On Tue, Oct 23, 2012 at 11:22 AM, manman ren <mren at apple.com> wrote:
>
> Hi All,
>
> I am trying to handle the Homogeneous Aggregate for ARM-VFP according to the
> spec:
>
> C.1.vfp If the argument is a VFP CPRC and there are sufficient consecutive
> VFP registers of the appropriate type unallocated then the argument is
> allocated to the lowest-numbered
2007 Oct 24
1
Problem with file system
While I untar a large archive on xfs and ext3 (ver 1.3 and ver 1.4) file systems, on a ppc processor with kernel ver 2.6.21, I get an error. Also, sometimes on ext3 (1.3 and 1.4) the file system goes read-only while untarring. The same tar file, when untarred on an i386 machine, works properly.

ERROR:
--------------
tar: Skipping to next header
gzip: stdin: invalid compressed data--crc error
tar:
2016 Jun 29
2
[PATCH 0/2] Added download_blocks API
With this API we complete the set of functions required to extract deleted files/data from most of the available filesystems. The function allows extracting data units (blocks) within a given range from a partition. The tests show an example of how the function can be used to retrieve deleted data.

Matteo Cafasso (2):
  New API: download_blocks
  Added download_blocks API test
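A minimal guestfish sketch of what such a call might look like, assuming from the description that the arguments are the device, a start block, a stop block, and a destination file on the host; the range and paths are placeholders:

guestfish --ro -a disk.qcow2
><fs> run
><fs> download-blocks /dev/sda1 0 4096 /tmp/blocks.bin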
2012 May 06
4
btrfs-raid10 <-> btrfs-raid1 confusion
Greetings, until yesterday I was running a btrfs filesystem across two 2.0 TiB disks in RAID1 mode for both metadata and data, without any problems. As space was getting short, I wanted to extend the filesystem with two additional drives I had lying around, both 1.0 TiB in size. Knowing little about the btrfs RAID implementation, I thought I had to switch to RAID10 mode, which I was told is
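For reference, the usual way to add devices and convert profiles with the btrfs tools (mount point and device names are placeholders):

btrfs device add /dev/sdc /dev/sdd /mnt
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt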
2016 Jul 17
4
[PATCH v2 0/2] Added download_blocks API
v2:
- Rebase on top of master

Matteo Cafasso (2):
  New API: download_blocks
  Added download_blocks API test

 daemon/sleuthkit.c    | 41 ++++++++++++++++++++++++++-
 generator/actions.ml  | 24 ++++++++++++++++
 gobject/Makefile.inc  |  2 ++
 src/MAX_PROC_NR       |  2 +-
 tests/tsk/Makefile.am |  1 +
2012 Sep 26
0
[LLVMdev] Modifying address-sanitizer to prevent threads from sharing memory
Hi Peter, Yes, the idea sounds feasible. You could use 8:1 or an even smaller shadow. E.g., if you can live with 64-byte-aligned mallocs/allocas, you can have a very compact 64:1 mapping. As you mention, having 1 shadow byte may not be 100% accurate, so you may choose a 2-byte shadow (e.g. a 64:2 mapping). If you know that you will not have more than 64 threads (or 64 classes of plugins), you may have 64:8
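A minimal sketch of the address arithmetic behind such a mapping, here for the 64:2 case described above; the shadow offset is a placeholder constant, not a real ASan value:

#include <stdint.h>

/* 64:2 shadow mapping: every 64-byte chunk of application memory
   is described by 2 bytes of shadow. */
static uintptr_t shadow_of(uintptr_t app_addr, uintptr_t shadow_offset) {
    return ((app_addr >> 6) << 1) + shadow_offset;  /* (addr / 64) * 2 + offset */
}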
2006 Nov 02
4
reproducible zfs panic on Solaris 10 06/06
Hi, I am able to reproduce the following panic on a number of Solaris 10 06/06 boxes (Sun Blade 150, V210 and T2000). The script to do this is:

#!/bin/sh -x
uname -a
mkfile 100m /data
zpool create tank /data
zpool status
cd /tank
ls -al
cp /etc/services .
ls -al
cd /
rm /data
zpool status
# uncomment the following lines if you want to see the system think
# it can still read and write to the
2012 Oct 24
0
[LLVMdev] [llvm-commits] ABI: how to let the backend know that an aggregate should be allocated on stack
In llvm-gcc, this decision was handled near llvm-arm.cpp:2737 in llvm_arm_aggregate_partially_passed_in_regs(). Basically, available registers would be counted up, and if the HA didn't fit, it went byval instead. I agree that we should unify this sort of logic in one place. I'm not sure that onstack is the best interim step toward that. Does byval work here?

Alex

On Oct 23, 2012, at