search for: disksizes

Displaying 20 results from an estimated 21 matches for "disksizes".

2016 Sep 26
2
Memory corruption when testing nbdkit python plugin with nbd-tester-client?
Hi, has anyone ever run "make check" from nbd against nbdkit with a python plugin? I usually get segfaults during such a run, and sometimes various other errors happen before the segfault, suggesting that some memory corruption is underway. AFAICS a pure python plugin should not be able to cause memory corruption. Examples of nbdkit logs for running "make check" or subsets of
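A minimal sketch (not the poster's actual test setup) of the scenario this entry describes: serving a trivial Python plugin with nbdkit and poking it from an NBD client. The plugin file name, size, and client command are assumptions; the callbacks follow the documented nbdkit Python plugin interface.

cat > ramdisk.py <<'EOF'
# Tiny RAM-disk plugin: 1 MiB, readable and writable (illustrative only).
disk = bytearray(1024 * 1024)

def open(readonly):
    return 1                              # opaque per-connection handle

def get_size(h):
    return len(disk)

def pread(h, count, offset):
    return disk[offset:offset + count]

def pwrite(h, buf, offset):
    disk[offset:offset + len(buf)] = buf
EOF
nbdkit -f -v python script=./ramdisk.py &   # serve it on the default NBD port (10809)
sleep 1
qemu-img info nbd:localhost:10809           # any NBD client will do for a quick smoke test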
2016 Sep 26
2
Re: Memory corruption when testing nbdkit python plugin with nbd-tester-client?
On 26.09.2016 14:29, Richard W.M. Jones wrote: > On Mon, Sep 26, 2016 at 02:18:02PM +0200, Carl-Daniel Hailfinger wrote: >> Hi, >> >> has anyone ever run "make check" from nbd against nbdkit with a python >> plugin? I usually get segfaults during such a run, and sometimes various >> other errors happen before the segfault, suggesting that some memory
2016 Sep 26
0
Re: Memory corruption when testing nbdkit python plugin with nbd-tester-client?
On 26.09.2016 19:20, Carl-Daniel Hailfinger wrote: > On 26.09.2016 14:29, Richard W.M. Jones wrote: >> On Mon, Sep 26, 2016 at 02:18:02PM +0200, Carl-Daniel Hailfinger wrote: >>> Hi, >>> >>> has anyone ever run "make check" from nbd against nbdkit with a python >>> plugin? I usually get segfaults during such a run, and sometimes various
2016 Sep 26
0
Re: Memory corruption when testing nbdkit python plugin with nbd-tester-client?
On Mon, Sep 26, 2016 at 02:18:02PM +0200, Carl-Daniel Hailfinger wrote: > Hi, > > has anyone ever run "make check" from nbd against nbdkit with a python > plugin? I usually get segfaults during such a run, and sometimes various > other errors happen before the segfault, suggesting that some memory > corruption is underway. > AFAICS a pure python plugin should not be
2011 Mar 07
1
diskspace and diskinodes tag for openvz
Hi! As far as I understood from the "xml format for openvz driver" thread available at [1], it should be possible to specify disk size and disk inodes for an OpenVZ VM via libvirt. But the following device section in the VM XML description doesn't set disksize and diskinodes properly (it looks like those parameters are taken from the default OpenVZ config and not as they are specified in
2002 Jun 23
1
Using MTOOLS in place of loopback mounting
I am trying to build a distribution without having to be root. The SYSLINUX installer typically needs root to do loopback mounting, but if you have MTOOLS, you can use that instead. This seems to work so far: #!/bin/sh # A minimal replacement for /usr/bin/syslinux; assumes that ldlinux.sys # and ldlinux.bss are available in the current directory. You must # specify locations of mkdosfs and
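The snippet cuts the script off. Below is a rough sketch of how such an mtools-based installation can proceed, assuming ldlinux.sys and ldlinux.bss were produced by a previous syslinux run on an identically formatted image; only the file names come from the post, every other detail is illustrative, and newer syslinux versions may still need the real installer to patch the boot sector.

IMG=floppy.img
dd if=/dev/zero of="$IMG" bs=1024 count=1440       # blank 1.44 MB image; no root needed
mkdosfs "$IMG"                                     # FAT-format the image without mounting it
mcopy -i "$IMG" ldlinux.sys ::ldlinux.sys          # first file on a fresh FS, so it stays contiguous
mattrib -i "$IMG" +r +h +s ::ldlinux.sys           # read-only/hidden/system, as the installer would set
dd if=ldlinux.bss of="$IMG" bs=512 count=1 conv=notrunc   # drop the SYSLINUX boot sector into place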
2011 Aug 06
4
[PATCH] ifmemdsk.c32: Allow boot options based on presence of MEMDISK
Below, attached, and available at the 'ifmemdsk' branch at: http://git.zytor.com/?p=users/sha0/syslinux.git;a=commitdiff;h=a975c12919bbd48739fede4ebfe099d98b87192e Review welcome! - Shao Miller ----- From a975c12919bbd48739fede4ebfe099d98b87192e Mon Sep 17 00:00:00 2001 From: Shao Miller <shao.miller at yrdsb.edu.on.ca> Date: Sat, 6 Aug 2011 05:24:46 -0400 Subject: [PATCH]
2004 Dec 21
1
Samba panics on disk size and connection is lost while copying files.
I have moved a Samba installation from an old samba 2.2.8a (on a 2.4.21 kernel) to a new server running Samba 3.0.10 on a SuSE 9.2 (kernel 2.6.8) and I now have a problem using the (same) shares from client W2K machines. When I open "My Computer" window on a client, the drives are marked by a red cross, as if they were disconnected, and the disksize is zero. I can descend into the
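One workaround commonly suggested in that era for shares reporting a zero or bogus disk size was smb.conf's "dfree command" parameter, which makes Samba ask an external script for the numbers instead of the C library. This is a generic sketch rather than the resolution of this particular thread; the script path is an assumption.

cat > /usr/local/bin/smb-dfree.sh <<'EOF'
#!/bin/sh
# Print "total_blocks available_blocks [block_size]" for the directory Samba passes in.
df -P -k "${1:-.}" | awk 'NR == 2 { print $2, $4, 1024 }'
EOF
chmod +x /usr/local/bin/smb-dfree.sh
# Then, in the [global] section of smb.conf:
#   dfree command = /usr/local/bin/smb-dfree.sh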
2016 Mar 11
0
[PATCH v1 19/19] zram: use __GFP_MOVABLE for memory allocation
Zsmalloc is ready for page migration, so zram can use __GFP_MOVABLE from now on. I ran a test to see how much it helps to create higher-order pages. The test scenario is as follows: KVM guest, 1G memory, ext4-formatted zram block device, for i in `seq 1 8`; do dd if=/dev/vda1 of=mnt/test$i.txt bs=128M count=1 & done wait `pidof dd` for i in `seq 1 2 8`; do rm -rf mnt/test$i.txt done
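A hypothetical reconstruction of the test scenario quoted above, laid out as a script: the zram setup lines are assumptions (the snippet does not show them), while the dd/rm loop follows the snippet verbatim.

modprobe zram num_devices=1
echo 1G > /sys/block/zram0/disksize        # size the zram device via its disksize attribute
mkfs.ext4 /dev/zram0                       # ext4-format it, as in the post
mkdir -p mnt && mount /dev/zram0 mnt

for i in `seq 1 8`; do                     # eight parallel 128 MB writes to fill the device
    dd if=/dev/vda1 of=mnt/test$i.txt bs=128M count=1 &
done
wait `pidof dd`
for i in `seq 1 2 8`; do                   # remove every other file to fragment it
    rm -rf mnt/test$i.txt
done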
2005 Jan 07
0
Re: [syslinux] syslinux vs grub
...useful for removable media like diskettes, Zip disks, USB flash keys, etc. I don't know if the fact that Syslinux is now installed using block mapping really breaks its usefulness for hard disks. You already mentioned that the system attribute will protect against defragmentation tools, but how about disksizes? Isolinux and PXElinux will remain irreplaceable, so it would be most interesting to see most development in every component that is not Syslinux itself: PXElinux, Isolinux, Memdisk, Menu, sample programs. Since 3.10preX, Memdisk's INITRD can take multiple files. However, can it also take a f...
2007 Aug 28
2
memdisk patch
Hello all, I ran across a couple of bugs in memdisk concerning hd images the other day. With the attached patch I've been able to successfully boot a 32MB gzipped hd image. (I'll put the image up for a short while at ftp://ftp.io.com/pub/usr/duanev/fdoshd.img.gz) The pxelinux.cfg entry I'm using is: label fdos kernel memdisk append keeppxe initrd=fdoshd.img.gz
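For readability, here is the same pxelinux.cfg entry quoted in the snippet, one directive per line and appended to a config file; the config file path is an assumption.

cat >> /tftpboot/pxelinux.cfg/default <<'EOF'
label fdos
    kernel memdisk
    append keeppxe initrd=fdoshd.img.gz
EOF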
2020 Sep 01
10
remove revalidate_disk()
Hi Jens, this series removes the revalidate_disk() function, which has been a real odd duck in recent years. The main reason most people use it is that it propagates a size change from the gendisk to the block_device structure. But it also calls into the rather ill-defined ->revalidate_disk method, which is mostly useless for the callers. So this adds a new helper to just
2009 Dec 07
3
[PATCH] memdisk: "safe hook" and mBFT
Two additions to MEMDISK to support OS drivers. The "safe hook" structure ("Safe Master Boot Record INT 13h Hook Routines") is a means for an OS driver to follow a chain of INT 13h hooks, examining the hooks' vendors and assuming responsibility for hook functionality along the way. For MEMDISK, we guarantee an additional field which holds the physical address for the
2009 Apr 23
11
Puppet on busybox, Bob Hope or No Hope?
When I say busybox, it's actually a VMware ESX server, which seems to use busybox (which I guess is the case for a number of other software appliances). The reason for wanting to install Puppet is to run the CLI tools to create nightly VMware snapshots. I'm happy to give it a go (and add the docs to the wiki) but I'm not too sure at this stage how big a task this might be and what,
2011 Sep 06
17
ext4 BUG in dom0 Kernel 2.6.32.36
Hi: I've hit an ext4 bug in the dom0 kernel 2.6.32.36. (See kernel stack below.) 32.36 kernel commit: http://git.kernel.org/?p=linux/kernel/git/jeremy/xen.git;a=commit;h=ae333e97552c81ab10395ad1ffc6d6daaadb144a The bug only shows up in our cluster environment, which includes 300 physical machines; one server runs into this bug per day. Running on top of every server, there are about 30
2016 Mar 11
31
[PATCH v1 00/19] Support non-lru page migration
Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs and so on) and of fork failing easily. The problem was fragmentation caused by zram and GPU driver pages. Those pages cannot be migrated, so compaction cannot work well either, and the reclaimer ends up shrinking all of the working-set pages. It made the system very slow and even caused fork to fail easily.
2016 Mar 21
22
[PATCH v2 00/18] Support non-lru page migration
Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs and so on) and of fork failing easily. The problem was fragmentation caused by zram and GPU driver pages. Those pages cannot be migrated, so compaction cannot work well either, and the reclaimer ends up shrinking all of the working-set pages. It made the system very slow and even caused fork to fail easily.
2016 Mar 30
33
[PATCH v3 00/16] Support non-lru page migration
Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs and so on) and of fork failing easily. The problem was fragmentation caused by zram and GPU driver pages. Those pages cannot be migrated, so compaction cannot work well either, and the reclaimer ends up shrinking all of the working-set pages. It made the system very slow and even caused fork to fail easily.