similar to: Vista performance (uggh)

Displaying 20 results from an estimated 400 matches similar to: "Vista performance (uggh)"

2009 Oct 20
1
ocfs2 - problem with performance
Hi all. I have a problem with ocfs2 performance. I just installed ocfs2 1.4.1 in a 2-node cluster. I use ocfs2 on a mail server. This system has a large number of small files, about 50 kB each. Ocfs2 is formatted: >> mkfs.ocfs2 -T mail -N 2 /dev/sdb1 mkfs.ocfs2 1.4.1 Cluster stack: classic o2cb Overwriting existing ocfs2 partition. Proceed (y/N): y Filesystem Type of mail Filesystem label=
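For reference, a small-file mail workload is usually formatted and mounted along these lines; this is only a sketch, and the block/cluster sizes and mount options are assumptions to be checked against the ocfs2 1.4 documentation, not values taken from the report above:

    # format with the "mail" preset and a small cluster size for many ~50 kB files
    mkfs.ocfs2 -T mail -N 2 -b 4K -C 4K -L maildata /dev/sdb1

    # mount without atime updates to cut metadata writes on a busy mail spool
    mount -t ocfs2 -o noatime /dev/sdb1 /var/spool/mail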
2008 Mar 29
1
Help in troubleshoot cause of high kernel activity
Hi, I have been experiencing a problem on our dedicated server running CentOS 5 and have been unable to successfully track it down. About 6 days ago I noticed a spike in load/CPU utilization, which went from a typical 0.2x-0.3x to 3.x. At the same time, average traffic also went up and so did the log usage. Prior to this, the server was working fine and there had been no changes to the
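A first pass at a problem like this is to see whether the time is going to system (kernel) CPU, I/O wait, or interrupts; a sketch using the usual sysstat tooling (intervals are arbitrary):

    vmstat 5          # watch the "sy" (system) and "wa" (iowait) columns
    mpstat -P ALL 5   # per-CPU breakdown of %sys, %irq, %soft (sysstat package)
    top               # press "1" for per-CPU view, then sort by CPU to find the busy processes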
2017 Feb 10
1
dovecot config for 1500 simultaneous connection
----- Original Message ----- From: Christian Balzer [mailto:chibi at gol.com] To: dovecot at dovecot.org Cc: 24x7server at 24x7server.net Sent: Fri, 10 Feb 2017 17:58:58 +0900 Subject: On Fri, 10 Feb 2017 01:13:20 +0530 Rajesh M wrote: > hello > > could somebody with experience let me know the dovecot config file settings to handle around 1500 simultaneous connections over pop3 and
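The settings usually involved are the per-service process and client limits; a minimal dovecot.conf sketch for Dovecot 2.x (the numbers are placeholders for illustration, not recommendations from this thread):

    # dovecot.conf -- illustrative values only
    service pop3-login {
      service_count = 0        # reuse login processes instead of one per connection
      process_min_avail = 4
    }
    service pop3 {
      process_limit = 1536     # must cover the expected concurrent POP3 sessions
    }
    protocol pop3 {
      mail_max_userip_connections = 10
    }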
2002 Sep 24
3
Samba performance issues
Hi all, We are implementing samba-ldap to act as an NT PDC and are seeing performance problems. We have a 1 GHz, 3 GB RAM, 36 GB box that is running samba-2.2.5 and openldap-2.0.23 under Red Hat 7.3 with kernel 2.4.18-3. Clients are all Win2k SP3. All the LDAP requests are to the localhost interface. The box is acting as the PDC for the domain, and also sharing diskspace and printers. When we get
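With Samba plus OpenLDAP of that vintage, missing attribute indexes in slapd are a common cause of slow logons; a hedged slapd.conf sketch (the attribute list is an assumption and should be adjusted to whatever the search log actually shows):

    # slapd.conf (OpenLDAP 2.0) -- index the attributes searched on every logon
    index objectClass             eq
    index uid,uidNumber,gidNumber eq
    # add eq indexes for any Samba schema attributes that show up in slapd's
    # search log (loglevel 256), then re-run slapindex
    cachesize 10000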
2008 Mar 28
1
bwlimit on rsync locally
Does "bwlimit" option really work on rsync locally? We have one type of harddisk and want to slow down rsync I/O on disk because I don't want the disk head gets too hot. While I'm trying to use --bwlimit option, it looks the rsync speed was slowed down, but iostat is not improved at all. In both case the block written speed is increased by the same amount. How could I really
2004 Apr 20
0
Re: ocfs performance question
Well, you are the second person who has complained about the performance of OCFS with the PERC controller. The best option would be to contact Dell. We are in contact with them... as in, they inform us of any issues they have with linux/ocfs on their hardware. As we do not have this particular hardware in-house, we can only speculate as to what the issue is. Things to look for ==> output
2009 Sep 21
2
Question about iostat output
Hello, We are planning to move most of our servers to ESX, but before buying our SAN we want to do some I/O stats to see if iSCSI is enough or if we have to go with FC. I found a plugin for Nagios that can log I/O stats with iostat. So far it's fine with single-disk/one-partition servers, but on our Oracle Database 10g server we have two drives in RAID 1 (/dev/sda) and 4 other
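For per-device numbers on the RAID sets it is usually enough to ask iostat for extended, per-device statistics; a sketch (the interval is arbitrary):

    # extended device stats every 5 seconds; watch %util, await and avgqu-sz
    # for sda (the RAID 1 pair) and the other drives
    iostat -dxk 5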
2004 Dec 02
3
Tbench benchmark numbers seem to be limiting samba performance in the 2.4 and 2.6 kernel.
Hi, I'm getting horrible performance on my samba server, and I am unsure of the cause after reading, benchmarking, and tuning. My server is a K6-500 with 43MB of RAM, standard x86 hardware. The OS is Slackware 10.0 with a 2.6.7 kernel; I've had similar problems with the 2.4.26 kernel. I use samba version 3.0.5. I've listed my partitions below, as well as the drive models. I have a
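tbench ships with the dbench package and simulates only the network/CPU side of an SMB-like load, which makes it useful for separating protocol overhead from disk I/O; a sketch of a local run (the client count is arbitrary):

    # on the server
    tbench_srv &

    # then drive it with a few simulated clients
    tbench 4 localhost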
2019 May 11
2
[nbdkit PATCH] cache: Reduce use of bounce-buffer
Although the time spent in memcpy/memset probably pales in comparison to time spent in socket I/O, it's still worth reducing the number of times we have to utilize a bounce buffer when we already have aligned data. Signed-off-by: Eric Blake <eblake@redhat.com> --- filters/cache/cache.c | 60 ++++++++++++++++++++++++++++--------------- 1 file changed, 39 insertions(+), 21
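The idea is simply to detect when a request already starts and ends on a block boundary and, in that case, read or write straight into the caller's buffer; a minimal C sketch of the alignment test, using hypothetical names rather than the actual cache.c code:

    /* Hypothetical sketch: only fall back to a bounce buffer when the
     * request is not already block-aligned. */
    #include <stdbool.h>
    #include <stdint.h>

    #define BLKSIZE 4096

    static bool
    is_aligned (uint64_t offset, uint32_t count)
    {
      return (offset % BLKSIZE) == 0 && (count % BLKSIZE) == 0;
    }

    /* if (is_aligned (offset, count))
     *   ... copy blocks directly to/from the caller's buffer ...
     * else
     *   ... round down/up and go through the bounce buffer ... */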
2018 Jan 21
2
Re: [PATCH nbdkit] filters: Add copy-on-write filter.
Here's the patch (on top of the preceding one) which uses a bitmap instead of SEEK_DATA. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com libguestfs lets you edit virtual machines. Supports shell scripting, bindings from many languages. http://libguestfs.org
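The replacement for SEEK_DATA is just a per-block "has this block been written to the overlay?" bitmap; a self-contained C sketch of that bookkeeping (names and block size are illustrative, not the filter's actual code):

    /* Illustrative only: one bit per block, set when the block has been
     * copied into the overlay, tested instead of lseek(SEEK_DATA). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define BLKSIZE 65536

    static uint8_t *bitmap;                 /* one bit per block */

    static int
    bitmap_init (uint64_t size_bytes)
    {
      uint64_t nr_blocks = (size_bytes + BLKSIZE - 1) / BLKSIZE;
      bitmap = calloc ((nr_blocks + 7) / 8, 1);
      return bitmap ? 0 : -1;
    }

    static void
    bitmap_set (uint64_t blknum)
    {
      bitmap[blknum / 8] |= 1 << (blknum % 8);
    }

    static bool
    bitmap_test (uint64_t blknum)
    {
      return bitmap[blknum / 8] & (1 << (blknum % 8));
    }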
2010 Jul 05
21
Aoe or iScsi???
Hi people... Here we use Xen 4 with Debian Lenny... We're using kernel 2.6.31.13 pvops... As a storage system we use AoE devices, so we installed the VMs on an AoE partition... The "NAS" server is an Intel-based bare-metal machine with SATA hard disc storage... However, sometimes I feel that the VMs are very slow... Also, all VMs have GPLPV drivers installed... So I am thinking about
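Before switching protocols it is worth measuring the raw AoE device from dom0, so the slowness can be pinned on the network/AoE layer rather than the VM stack; a sketch (the device names are assumptions, /dev/etherd/eX.Y being the usual AoE naming):

    # sequential read from the exported AoE target, bypassing the guests
    dd if=/dev/etherd/e0.1 of=/dev/null bs=1M count=1024

    # compare with the same test run on the NAS box against the local SATA disk
    dd if=/dev/sdb of=/dev/null bs=1M count=1024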
2002 Feb 28
5
Problems with ext3 fs
Hi, Apologies, this is going to be quite long - I'm going to provide as much info as possible. I'm running a system with ext3 fs on software RAID. The RAID set-up is as shown below: jlm@nijinsky:~$ cat /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid5] read_ahead 1024 sectors md0 : active raid1 hdc1[1] hda1[0] 96256 blocks [2/2] [UU] md5 : active raid1 hdk1[1] hde1[0]
2019 May 13
0
[nbdkit PATCH v2 2/2] cache, cow: Reduce use of bounce-buffer
Although the time spent in memcpy/memset probably pales in comparison to time spent in socket I/O, it's still worth reducing the number of times we have to utilize a bounce buffer when we already have aligned data. Note that blocksize, cache, and cow all do block fragmentation and bounce-buffer alignment; this brings the logic in cache and cow (which were copied from one another) more
2019 May 13
3
[nbdkit PATCH v2 0/2] Bounce buffer cleanups
Based on Rich's review of my v1 that touched only cache.c, I have now tried to bring all three filters that do alignment rounding in line with one another. There is definitely room for future improvements once we teach nbdkit to let filters and plugins advertise block sizes, but I'm hoping to get NBD_CMD_CACHE implemented first. Eric Blake (2): blocksize: Process requests in linear order
2018 Dec 28
0
[PATCH nbdkit 5/9] cache: Allow this filter to serve requests in parallel.
Make the implicit lock explicit, and hold it around blk_* operations. This allows us to relax the thread model for the filter to NBDKIT_THREAD_MODEL_PARALLEL. --- filters/cache/blk.h | 7 ++++++ filters/cache/cache.c | 57 +++++++++++++++++++++++++++++++------------ 2 files changed, 49 insertions(+), 15 deletions(-) diff --git a/filters/cache/blk.h b/filters/cache/blk.h index 24bf6a1..ab9134e
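The shape of the change is a plain pthread mutex held around each blk_* call so that multiple connection threads can enter the filter at once; a simplified C sketch with hypothetical helper names, not the actual patch:

    #include <pthread.h>
    #include <stdint.h>

    /* blk_read stands in for the filter's blk_* helpers; the real
     * signature in filters/cache/blk.h may differ. */
    extern int blk_read (void *buf, uint32_t count, uint64_t offset);

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static int
    cached_read (void *buf, uint32_t count, uint64_t offset)
    {
      int r;

      pthread_mutex_lock (&lock);        /* serialize access to the shared cache */
      r = blk_read (buf, count, offset);
      pthread_mutex_unlock (&lock);      /* everything outside runs in parallel */
      return r;
    }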
2019 Apr 24
0
[nbdkit PATCH 4/4] filters: Check for mutex failures
Commit 975dab14 argued that for simple lock/unlock sequences, it was easier to avoid the cleanup.h macros. But since that time we have added additional sanity checking to the macros, at which point the boilerplate of open-coding that sanity checking outweighs any benefit compared to just using the macros in more places. Signed-off-by: Eric Blake <eblake@redhat.com> --- filters/cache/cache.c | 23
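The check itself is small: pthread_mutex_lock returns an errno-style value rather than setting errno, and a lock failure is unrecoverable, so the sane response is to abort. A sketch of the open-coded version that such cleanup macros would replace (generic pthreads, not nbdkit's exact code):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void
    do_locked_work (void)
    {
      int r = pthread_mutex_lock (&lock);
      if (r != 0) {                 /* returns an error number, does not set errno */
        fprintf (stderr, "pthread_mutex_lock: %s\n", strerror (r));
        abort ();
      }

      /* ... critical section ... */

      r = pthread_mutex_unlock (&lock);
      if (r != 0) {
        fprintf (stderr, "pthread_mutex_unlock: %s\n", strerror (r));
        abort ();
      }
    }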
2018 Feb 01
0
[nbdkit PATCH v2 1/3] backend: Rework internal/filter error return semantics
Previously, we let a plugin set an error in either thread-local storage (nbdkit_set_error()) or errno, and connections.c would then decode which error to use. But with filters in the mix, it is very difficult for a filter to know what error was set by the plugin (particularly since nbdkit_set_error() has no public counterpart for reading the thread-local storage). What's more, if a filter does
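From a plugin author's point of view, the visible rule is: return -1 and make sure an error has been set, either via errno or via nbdkit_set_error(). A hedged C sketch of a typical error path; nbdkit_set_error() is the real public API, but the callback shown is a made-up example and its exact signature should be checked against the plugin API version in use:

    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>

    #include <nbdkit-plugin.h>

    /* Illustrative pread callback reading from a file descriptor stored
     * in the per-connection handle. */
    static int
    example_pread (void *handle, void *buf, uint32_t count, uint64_t offset)
    {
      int fd = *(int *) handle;

      while (count > 0) {
        ssize_t r = pread (fd, buf, count, offset);
        if (r == -1) {
          nbdkit_set_error (errno);   /* explicit error wins over plain errno */
          return -1;
        }
        if (r == 0) {
          nbdkit_set_error (EIO);     /* unexpected end of file */
          return -1;
        }
        buf = (char *) buf + r;
        count -= r;
        offset += r;
      }
      return 0;
    }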
2019 Apr 01
1
Readahead in the nbdkit curl plugin
I'm trying to replicate the features of the qemu curl plugin in nbdkit's curl plugin, so that we can use nbdkit in virt-v2v to access VMware servers. I've implemented everything else so far [not posted yet] except for readahead. To my surprise, qemu's curl driver actually implements readahead itself; I thought it was a curl feature. I'm not completely clear _how_ it
2019 May 13
0
Re: [nbdkit PATCH] cache: Reduce use of bounce-buffer
On Sat, May 11, 2019 at 03:30:04PM -0500, Eric Blake wrote: > Although the time spent in memcpy/memset probably pales in comparison > to time spent in socket I/O, it's still worth reducing the > number of times we have to utilize a bounce buffer when we already > have aligned data. > > Signed-off-by: Eric Blake <eblake@redhat.com> > --- >
2019 Apr 24
7
[nbdkit PATCH 0/4] More mutex sanity checking
I do have a question about whether patch 2 is right, or whether I've exposed a bigger problem in the truncate (and possibly other) filter, but the rest seem fairly straightforward. Eric Blake (4): server: Check for pthread lock failures truncate: Factor out reading real_size under mutex plugins: Check for mutex failures filters: Check for mutex failures filters/cache/cache.c