Displaying 7 results from an estimated 7 matches for "read_ahead".
2011 Oct 11
5
[PATCH] libxl: reimplement buffer for bootloading and drop data if buffer is full
..._bootloader_args(libxl__gc *gc,
libxl_domain_build_info *info,
@@ -165,10 +167,11 @@ static pid_t fork_exec_bootloader(int *m
*/
static char * bootloader_interact(libxl__gc *gc, int xenconsoled_fd, int bootloader_fd, int fifo_fd)
{
- int ret;
+ int ret, read_ahead, timeout = 0;
size_t nr_out = 0, size_out = 0;
char *output = NULL;
+ struct timeval wait;
/* input from xenconsole. read on xenconsoled_fd write to bootloader_fd */
int xenconsoled_prod = 0, xenconsoled_cons = 0;
@@ -177,6 +180,10 @@ static char * bootloader_interact(lib...
2009 Sep 24
1
xen & iSCSI
...ance with simple dd tests (~100MB/s both reading and
writing).
I then use the block devices for the domU and if I repeat the same dd
test
from within the domU the write performance is still good (~100MB/s), but
the read performance is cut in half (~55MB/s).
I tried changing several parameters like read_ahead and such, but I
cannot obtain good read performance in the domU.
Any idea?
Thanks, Daniel.
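For the read_ahead tuning Daniel mentions, one common knob is the per-device readahead, visible and settable with blockdev. Device names below are placeholders; adjust to the actual dom0 backing device and domU block device, and retest with the same dd workload:

```shell
# Current readahead, in 512-byte sectors
blockdev --getra /dev/sda
# Raise it, e.g. to 1024 sectors (512 KiB), then repeat the dd read test
blockdev --setra 1024 /dev/sda
dd if=/dev/sda of=/dev/null bs=1M count=1024
```

Readahead is applied per device, so the setting inside the domU is independent of the dom0 setting on the backing device.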
2004 Dec 22
4
About block device mapping for guests
I am a bit puzzled on how the block device mapping works for Xen
guests, particularly how the device shows up on the guest
side. When configuring the domain, one can specify the device name the
partition should appear with inside the guest. And if that device does
not exist on the host side, xend complains. A workaround for this was
to specify the device with explicit minor and major
2013 Jan 10
0
[PATCH] in.tftpd: Allow chdir w/o root, improve I/O
.... I changed
the read() to an fread() (to take advantage of stdio buffering), and added
a setvbuf() call after the file is fdopen()ed to set a 64kB buffer. The
server now reads files in 64kB chunks, according to strace, and throughput
is much improved. (Originally, I was going to modify readit()/read_ahead()
to use multiple buffers instead of just two, but this code is intertwined
with the server mechanics in a way that makes leaning on stdio a *lot*
simpler.)
Lastly, there were a couple of minor nits: the "toplevel" variable being
defined twice, and a trailing comma in the long_only_o...
2002 Apr 25
1
Re: Problems with ext3 fs
...v/hda2 on /boot type ext2 (rw,noexec,nosuid,nodev)
/dev/md/5 on /var type ext3 (rw,nosuid,nodev,usrquota,grpquota)
/dev/md/6 on /home type ext3 (rw,nosuid,nodev,usrquota,grpquota)
/dev/md/7 on /tmp type ext3 (rw,nosuid,nodev,usrquota,grpquota)
hdn@aurora:~$ cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md1 : active raid5 ide/host2/bus1/target0/lun0/part1[2]
ide/host2/bus0/target0/lun0/part1[3]
ide/host0/bus1/target0/lun0/part1[0] ide/host0/bus0/target0/lun0/part1[1]
1023744 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
md3 : active raid5 ide/host2/bus1/target0/lun0/par...
2002 Feb 28
5
Problems with ext3 fs
Hi,
Apologies, this is going to be quite long - I'm going to provide as much
info as possible.
I'm running a system with ext3 fs on software RAID. The RAID set-up is as
shown below:
jlm@nijinsky:~$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
96256 blocks [2/2] [UU]
md5 : active raid1 hdk1[1] hde1[0]
976640 blocks [2/2] [UU]
md6 : active raid1 hdk5[1] hde5[0]
292672 blocks [2/2] [UU]
md7 : active raid1 hdk6[1] hde6[0]
1952896 blocks [2/2] [UU]
md8 : active raid1...
2010 Mar 10
39
SSD Optimizations
I'm looking to try BTRFS on a SSD, and I would like to know what SSD
optimizations it applies. Is there a comprehensive list of what ssd
mount option does? How are the blocks and metadata arranged? Are there
options available comparable to ext2/ext3 to help reduce wear and
improve performance?
Specifically, on ext2 (journal means more writes, so I don't use ext3 on
SSDs,