search for: setra

Displaying 18 results from an estimated 18 matches for "setra".

2014 Nov 23
0
[PATCH 2/3] New API: guestfs_blockdev_setra: Adjust readahead for filesystems and devices.
This adds a binding for 'blockdev --setra', allowing you to adjust the readahead parameter for filesystems and devices. --- daemon/blockdev.c | 30 ++++++++++++++++++++---------- generator/actions.ml | 14 ++++++++++++++ 2 files changed, 34 insertions(+), 10 deletions(-) diff --git a/daemon/blockdev.c b/daemon/blockdev.c index 8a7...
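For readers who want to try the new binding, here is a minimal sketch of driving it from guestfish. The command name blockdev-setra (mirroring the C API guestfs_blockdev_setra), the disk image name and the 4096-sector value are assumptions for illustration, not details taken from the patch:

    # Assumed guestfish usage; blockdev-setra is expected to be the generated
    # counterpart of guestfs_blockdev_setra. Readahead is given in 512-byte sectors.
    guestfish --rw -a disk.img <<'EOF'
    run
    blockdev-setra /dev/sda 4096
    EOF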
2009 Jan 15
2
3Ware 9650SE tuning advice
...set up two volumes (one for each card); one RAID6 and one RAID5. I used the default 64K block size and am trying various filesystems in tandem with it. I stumbled across the following recommendations on 3Ware's site: echo "64" > /sys/block/sda/queue/max_sectors_kb blockdev --setra 16384 /dev/sda echo "512" > /sys/block/sda/queue/nr_requests But am wondering if there are other things I should be looking at, including changing the IO scheduler. Any particular options I should use with filesystem creation to match up with my RAID block size? I also noted that...
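For reference, 3Ware's three suggestions boil down to two sysfs writes and one blockdev call. A sketch that applies them to /dev/sda follows; whether these values suit a particular array and workload is an open question, so benchmark before and after:

    # Starting points from 3Ware's tuning note; run as root.
    echo 64  > /sys/block/sda/queue/max_sectors_kb   # cap individual request size at 64 KB
    echo 512 > /sys/block/sda/queue/nr_requests      # deepen the block-layer request queue
    blockdev --setra 16384 /dev/sda                  # readahead of 16384 sectors (8 MB)
    # If experimenting with the I/O scheduler as well (kernel permitting):
    echo deadline > /sys/block/sda/queue/scheduler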
2014 Nov 23
7
[PATCH 0/3] patches needed for virt-bmap
See http://rwmj.wordpress.com/2014/11/23/mapping-files-to-disk/
2006 Apr 14
1
Ext3 and 3ware RAID5
I run a decent amount of 3ware hardware, all under centos-4. There seems to be some sort of fundamental disagreement between ext3 and 3ware's hardware RAID5 mode that trashes write performance. As a representative example, one current setup is 2 9550SX-12 boards in hardware RAID5 mode (256KB stripe size) with a software RAID0 stripe on top (also 256KB chunks). bonnie++ results look
2007 Jun 29
2
poor read performance
I am seeing what seems to be a notable limit on read performance of an ext3 filesystem. If anyone could offer some insight it would be helpful. Background: 12 x 500G SATA disks in a Hardware RAID enclosure connected via 2Gb/s FC to a 4 x 2.6 Ghz system with 4GB ram running RHEL4.5. Initially the enclosure was configured RAID5 10+1 parity, although I've also tried RAID 50 and currently RAID 0.
2020 May 15
2
CentOS7 and NFS
...mbo frames (if yes, you should have them on clients and server)? You might think about disabling flow control on the switch and on the network card. Are there a lot of dropped packets? For network tuning, check http://fasterdata.es.net/host-tuning/linux/ Did you try to enable readahead (blockdev --setra) on the filesystem? On the client side, changing the mount options helps. The default read/write block size is quite little, increase it (rsize, wsize), and use noatime. Cheers, Barbara > On 15 May 2020, at 09:26, Patrick Bégou <Patrick.Begou at legi.grenoble-inp.fr> wrote: > &...
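As a rough illustration of the readahead and mount-option advice above (the device name, export path and transfer sizes are made-up examples, not values from the thread):

    # Server side: check and raise readahead on the exported device (512-byte sectors).
    blockdev --getra /dev/sdb
    blockdev --setra 16384 /dev/sdb
    # Client side: larger transfer sizes plus noatime.
    mount -t nfs4 -o rsize=1048576,wsize=1048576,noatime nfsserver:/export /mnt/export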
2014 Nov 24
2
[PATCH v2 0/2] patches needed for virt-bmap
Does *not* incorporate changes suggested by Pino yet. Rich.
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
...er from https://bugzilla.redhat.com/show_bug.cgi?id=121434#c275). These are the default settings: /sys/block/sda/device/queue_depth = 254 /sys/block/sda/queue/nr_requests = 8192 /proc/sys/vm/dirty_expire_centisecs = 3000 /proc/sys/vm/dirty_ratio = 30 3Ware mentions elevator=deadline, blockdev --setra 16384 along with nr_requests=512 in their performance tuning doc - these alone seem to make no difference to the latency problem. Setting dirty_expire_centisecs = 1000 and dirty_ratio = 5 does indeed reduce the number of processes in 'b' state as reported by vmstat 1 during an iozone b...
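For comparison, the settings discussed in that thread map onto the following commands, using the values quoted above; whether any of them help is workload-dependent:

    blockdev --setra 16384 /dev/sda                  # 3Ware's readahead suggestion
    echo 512  > /sys/block/sda/queue/nr_requests     # 3Ware's nr_requests suggestion
    echo 1000 > /proc/sys/vm/dirty_expire_centisecs  # expire dirty pages sooner
    echo 5    > /proc/sys/vm/dirty_ratio             # cap dirty memory at 5% of RAM
    # elevator=deadline is a boot parameter: append it to the kernel line in grub.conf.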
2006 Oct 20
0
3ware 9550SXU-4LP performance
...ance, for obvious reasons) * used noirqbalance parameter to prevent "nobody cared" messages related to usb irqs * use xfs (much faster than ext3) * mv /lib/tls /lib/tls.disabled mv /lib/tls64 /lib64/tls.disabled ---> remarkable performance boost!(!) * disable Queuing * blockdev --setra 16384 * ask 3ware support * use the newest driver for the controller (2.26.04.010) * update firmware to FE9X 3.04.01.011 * use modules for the scsi subsys My setup: * Supermicro X6DH8-G2+, 4GB, 2+3.6Ghz * use Bus Width, 64 bits, use Bus Speed 133 Mhz (I haven't noticed any difference betw...
2013 Feb 18
1
btrfs send & receive produces "Too many open files in system"
I believe what I am going to write is a bug report. When I finally did # btrfs send -v /mnt/adama-docs/backups/20130101-192722 | btrfs receive /mnt/tmp/backups to migrate btrfs from one partition layout to another. After a while the system keeps saying that "Too many open files in system" and denies access to almost every command line tool. When I had access to iostat I confirmed the
2020 May 16
0
CentOS7 and NFS
..., you should have them on clients and server)? You might think about disabling flow control on the switch and on the network card. Are there a lot of dropped packets? > > For network tuning, check http://fasterdata.es.net/host-tuning/linux/ > > Did you try to enable readahead (blockdev --setra) on the filesystem? > > On the client side, changing the mount options helps. The default read/write block size is quite little, increase it (rsize, wsize), and use noatime. > > > Cheers, > Barbara > > > > > >> On 15 May 2020, at 09:26, Patrick Bégou <Patri...
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than approximately 40MB/s on an ext2 file system. IMO, this is horrible performance for a 6-drive, hardware RAID 5 array. Please have a look at what I'm doing and let me know if anybody has any suggestions on how to improve the performance... System specs: ----------------- 2 x 2.8GHz Xeons 6GB RAM 1 3ware 9500S-12 2 x 6-drive,
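One option often raised in threads like this, though not quoted in the excerpt above, is telling mke2fs about the RAID chunk size so that block and inode bitmaps are not all placed on the same member disk. A sketch, assuming a 64 KB chunk and 4 KB filesystem blocks (stride = 64 KB / 4 KB = 16), with the device name made up:

    # stride is the RAID chunk size divided by the filesystem block size.
    mke2fs -b 4096 -E stride=16 /dev/sda1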
2020 May 13
2
CentOS7 and NFS
On 13/05/2020 at 07:32, Simon Matter via CentOS wrote: >> On 12/05/2020 at 16:10, James Pearson wrote: >>> Patrick Bégou wrote: >>>> Hi, >>>> >>>> I need some help with NFSv4 setup/tuning. I have a dedicated nfs server >>>> (2 x E5-2620, 8 cores/16 threads each, 64GB RAM, 1x10Gb ethernet and 16x >>>> 8TB HDD) used by two
2006 Oct 01
4
3Ware 9550SX-4LP Performance
I know there are a few 3Ware fans here and I was hoping to find some help. I just built a new server using a 3Ware 9550SX-4LP with four disks in raid 5. The array is fully initialized but I'm not getting the write performance I was hoping for -- only 40 to 45MB/Sec. 3Ware's site advertises 300MB/Sec writes using 8 disks on the PCI Express version of this card (the 9580 I think.)
2013 Feb 27
1
Slow read performance
Help please- I am running 3.3.1 on Centos using a 10GB network. I get reasonable write speeds, although I think they could be faster. But my read speeds are REALLY slow. Executive summary: On gluster client- Writes average about 700-800MB/s Reads average about 70-80MB/s On server- Writes average about 1-1.5GB/s Reads average about 2-3GB/s Any thoughts? Here are some additional details:
2015 Jul 21
0
ANNOUNCE: libguestfs 1.30 released
...u-img will crash and the crash is reported back to libguestfs callers as an error message. API New APIs guestfs_add_libvirt_dom This exposes a previously private API that allows you to pass a virDomainPtr object directly from libvirt to libguestfs. guestfs_blockdev_setra Adjust readahead parameter for devices. See blockdev --setra command. guestfs_btrfs_balance guestfs_btrfs_balance_cancel guestfs_btrfs_balance_pause guestfs_btrfs_balance_resume guestfs_btrfs_balance_status Balance support for Btrfs filesystems (Hu Tao)....
2012 Feb 07
1
Recommendations for busy static web server replacement
Hi all after being a silent reader for some time and not very successful in getting good performance out of our test set-up, I'm finally getting to the list with questions. Right now, we are operating a web server serving out 4MB files for a distributed computing project. Data is requested from all over the world at a rate of about 650k to 800k downloads a day. Each data file is usually
2005 Apr 15
16
Serial ATA hardware raid.
Hi everyone, I'm looking into setting up a SATA hardware raid, probably 5 to use with CentOS 4. I chose hardware raid over software mostly because I like the fact that the raid is transparent to the OS. Does anyone know of any SATA controllers that are well tested for this sort of usage? From what I can tell from googling, this is more or less where RHEL stands: Red Hat Enterprise Linux