similar to: dm-ioband RPM packages

Displaying 20 results from an estimated 10000 matches similar to: "dm-ioband RPM packages"

2008 Sep 18
2
dm-ioband + bio-cgroup benchmarks
Hi All, I have got excellent results with dm-ioband, which controls the disk I/O bandwidth even when it accepts delayed write requests. This time, I ran some benchmarks with high-end storage; the reason was to avoid a performance bottleneck due to mechanical factors such as seek time. You can see the details of the benchmarks at: http://people.valinux.co.jp/~ryov/dm-ioband/hps/ Thanks, Ryo
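For context, an ioband device sits on top of a normal block device and is created with dmsetup. A minimal sketch following the table format in the dm-ioband documentation; the devices and weights below are illustrative, not the benchmark configuration:

    # Create two ioband devices in device group 1 using the "weight"
    # policy; the trailing ":80" and ":40" set the default group weights,
    # so ioband1 gets roughly twice ioband2's bandwidth under contention.
    echo "0 $(blockdev --getsize /dev/sda1) ioband /dev/sda1 1 0 0 none" \
         "weight 0 :80" | dmsetup create ioband1
    echo "0 $(blockdev --getsize /dev/sda2) ioband /dev/sda2 1 0 0 none" \
         "weight 0 :40" | dmsetup create ioband2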
2008 Feb 25
0
The I/O bandwidth controller: dm-ioband Performance Report
Hi All, I report new results of the dm-ioband bandwidth control test. The previous test results were posted on Jan 25. I've got really good results, just as in the last report. dm-ioband works well with Xen virtual disks. I also announce that the dm-ioband website has launched. The patches, the manual, the benchmark results and other related information are available through this site. Please check it
2009 Jun 26
1
dm-ioband RPM packages
Hi all, I'm pleased to announce that the new dm-ioband RPM package (v1.12.0) has been released at: http://sourceforge.net/apps/trac/ioband/wiki/dm-ioband dm-ioband provides disk bandwidth control on a per-partition, per-user, per-process and per-virtual-machine (such as KVM or Xen) basis. The RPM packages are for Red Hat Enterprise Linux 5.x and CentOS 5.x. They were tested on RHEL 5.2 and
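The per-user mode mentioned above is configured through dmsetup messages. A hedged sketch, assuming the type/attach/weight message interface described in the dm-ioband manual; the uid and weight here are made up:

    # Classify I/O by uid instead of treating the whole device as one
    # group, then give uid 1000 its own group with weight 40 (all other
    # users fall into the default group).
    dmsetup message ioband1 0 type user
    dmsetup message ioband1 0 attach 1000
    dmsetup message ioband1 0 weight 1000:40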
2008 Nov 08
0
No subject
and nice. My concern is "bio_cgroup_id". It's provided only for bio_cgroup. This summer, I tried to add swap_cgroup_id only for the mem+swap controller, but commenters said "please provide "id and lookup" in the cgroup layer, it should be useful." And I agree with them. (and postponed it ;) Could you try adding "id" in the cgroup layer? What do you think, Paul and others?
2008 Nov 13
6
[PATCH 0/8] I/O bandwidth controller and BIO tracking
Hi everyone, This is a new release of dm-ioband and bio-cgroup. With this release, the overhead of bio-cgroup is significantly reduced and the accuracy of block I/O tracking is much improved. These patches are for 2.6.28-rc2-mm1. Enjoy it! dm-ioband ========= Dm-ioband is an I/O bandwidth controller implemented as a device-mapper driver, which gives a specified amount of bandwidth to each job running on
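As a sketch of how the two pieces fit together: a bio-cgroup is created per job and attached to an ioband group by its id, so that even delayed (buffered) writes are charged to the group that issued them. The mount option, the bio.id file, and the cgroup group type below are assumptions taken from the bio-cgroup posting, not verified against these patches:

    # Mount the bio-cgroup subsystem and put the current shell's job
    # into its own group.
    mkdir -p /cgroup/bio
    mount -t cgroup -o bio none /cgroup/bio
    mkdir /cgroup/bio/grp1
    echo $$ > /cgroup/bio/grp1/tasks

    # Attach the group to an existing ioband device by its bio-cgroup
    # id and assign it a weight.
    id=$(cat /cgroup/bio/grp1/bio.id)
    dmsetup message ioband1 0 type cgroup
    dmsetup message ioband1 0 attach "$id"
    dmsetup message ioband1 0 weight "$id":60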
2008 Apr 24
1
Utility tool for dm-ioband.
Hi everyone, I made a utility tool for dm-ioband version 0.0.4, named iobandctl. It enables you to easily apply I/O bandwidth control to an entire disk and manage it. It helps you set the percentage of bandwidth to give to each partition, and to each user, process, group, or cgroup. (You cannot use cgroup support yet, because the dm-ioband patch to enable cgroup support
2008 Feb 29
1
I/O bandwidth control on KVM
Hello all, I've implemented a block device driver which throttles block I/O bandwidth, which I called dm-ioband, and I have been trying to throttle I/O bandwidth in a KVM environment. But unfortunately it doesn't work well: the number of issued I/Os does not match the bandwidth setting. On the other hand, I got good results when accessing the local disk directly on the local machine. I'm
2008 Apr 28
1
Kickstart syntax for CentOS upgrade
I'd like to automate the upgrade from CentOS 4.6 to 5.1 as much as possible. Since upgrades per se are not really recommended, I'm planning to do a kickstart installation. However, I want to leave one of the existing partitions (/scratch) untouched during the installation. Here is my current layout (LogVol00 is swap, so it is not shown in the df output below): # df -hl
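The usual way to leave an existing partition untouched in a kickstart install is the part directive with --onpart and --noformat, which tells anaconda to reuse the partition without reformatting it. A sketch of the relevant ks.cfg lines, assuming /scratch lives on a plain partition such as sda3 (the real device name has to come from the df output):

    # ks.cfg fragment: reuse the existing /scratch partition as-is;
    # --noformat preserves its data, while the other part/logvol lines
    # repartition the rest of the disk normally.
    part /scratch --onpart=sda3 --noformat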