similar to: Help - iSCSI and SAMBA?

Displaying 20 results from an estimated 300 matches similar to: "Help - iSCSI and SAMBA?"

2006 Apr 07
0
How to interpret the output of 'iostat -x /dev/sdb1 20 100' ??
Hi, I'm a newbie to the 'iostat' tool and I've read its manual several times, but it doesn't help. I still get confused by the output of 'iostat'; the manual seems too abstract, or high-level, for me. Let's post the output first:
avg-cpu: %user %nice %sys %idle
         5.70  0.00 3.15 91.15
Device: rrqm/s wrqm/s r/s w/s
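For the question above, it may help to see where iostat's numbers come from. A minimal sketch (assuming Linux; the device name is a placeholder, the original question used sdb1) that derives r/s, w/s and %util from /proc/diskstats, the counter file iostat itself samples:

```shell
# Sample /proc/diskstats twice and compute per-second rates, as iostat does.
# Fields used: $3 device name, $4 reads completed, $8 writes completed,
# $13 milliseconds spent doing I/O.
DEV="${DEV:-sda}"
INTERVAL=1
echo "device r/s w/s %util (over ${INTERVAL}s)"
s1=$(awk -v d="$DEV" '$3 == d {print $4, $8, $13}' /proc/diskstats 2>/dev/null)
sleep "$INTERVAL"
s2=$(awk -v d="$DEV" '$3 == d {print $4, $8, $13}' /proc/diskstats 2>/dev/null)
echo "$s1 $s2" | awk -v d="$DEV" -v t="$INTERVAL" '
  # six fields: r1 w1 ms1 r2 w2 ms2; deltas over t seconds give the rates
  NF == 6 { printf "%s %.1f %.1f %.1f\n", d, ($4-$1)/t, ($5-$2)/t, ($6-$3)/t/10 }'
```

The `20 100` arguments in the original command just repeat this sampling: a 20-second interval, 100 reports.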
2005 Oct 26
1
which process & file taking up disk IO?
I'm having load problems on a server. The bottleneck appears to be disk IO: iostat shows ~100 under %util during peak usage. I'm running things like Clam AntiVirus, POP, Exim, Apache, and MySQL on the server. Is there a way to check which process and which file is taking up disk IO, or to see what is being written to the disk? I'm very puzzled, as the amount of writes is 10 times
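The usual tools for this are iotop or pidstat -d, but on kernels with task I/O accounting the per-process write volume can also be read straight from procfs. A rough sketch (assumes Linux; unprivileged runs only see your own processes, run as root for the full picture):

```shell
# Rank processes by cumulative write_bytes from /proc/<pid>/io.
top_writers=$(
  for f in /proc/[0-9]*/io; do
    pid=${f#/proc/}; pid=${pid%/io}
    wb=$(awk '/^write_bytes:/ {print $2}' "$f" 2>/dev/null)
    [ -n "$wb" ] && echo "$wb $pid $(cat /proc/$pid/comm 2>/dev/null)"
  done | sort -rn | head -5
)
echo "write_bytes pid comm"
echo "$top_writers"
```

This shows which process is writing; which file it writes to would then need lsof or strace on the suspect pid.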
2009 Apr 16
2
Weird performance problem
Hi, I'm running a CentOS 4 server and I sometimes face a weird problem. It is a weird performance problem, and here is how I discovered it. This server runs OpenVZ virtual machines, and one of them is an asterisk server for my personal use. The first symptom of the problem was that the voice quality became flaky. So I logged on the server to see what could be eating cpu cycles, when I
2008 Oct 05
1
io writes very slow when using vmware server
We are struggling with a strange problem. When we have some VMware clients running (mostly MS Windows clients), the IO-write performance on the host becomes very bad. The guest OSes do not do anything; just having them started, sitting at the login prompt, is enough to trigger the problem. The host has plenty of RAM (4G), and all clients fit easily into that space. The disk system is a
2014 Jun 20
1
iostat results for multi path disks
Here is a sample of running iostat on a server that has a LUN from a SAN with multiple paths. I am specifying a device list that just grabs the bits related to the multipath device:
$ iostat -dxkt 1 2 sdf sdg sdh sdi dm-7 dm-8 dm-9
Linux 2.6.18-371.8.1.el5 (db21b.den.sans.org) 06/20/2014
Time: 02:30:23 PM
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await
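When reading iostat output like the above, it helps to know which dm-N node aggregates which sdX paths. A hedged sketch that reads the device-mapper topology out of sysfs (no multipath tools needed; `multipath -ll` shows the same mapping with more detail):

```shell
# For each device-mapper node, print its dm name and the underlying
# path devices ("slaves") it is built from.
hdr="dm-device name slaves"
echo "$hdr"
for d in /sys/block/dm-*; do
  [ -e "$d" ] || continue
  echo "${d##*/} $(cat "$d/dm/name" 2>/dev/null) $(ls "$d/slaves" 2>/dev/null | tr '\n' ' ')"
done
```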
2011 Oct 09
1
Btrfs High IO-Wait
Hi, I have high IO-wait on the OSDs (Ceph); the OSDs are running a v3.1-rc9 kernel. I also see high IO rates, around 500 IO/s reported via iostat.
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 0.00 6.80 0.00 62.40 18.35 0.04 5.29 0.00 5.29 5.29 3.60
sdb
2010 Aug 20
0
awful i/o performance on xen paravirtualized guest
Hi. I'm testing a CentOS 5.4 Xen PV guest on top of a CentOS 5.4 host. For some reason, the disk performance from the guest is awful. When I do an import, the IO is fine for a while, then climbs to 100% and stays there most of the time. At first I thought it was because I was using file-backed disks, so I deleted those and changed to LVM, but the situation didn't improve. Here's an
2010 Feb 09
3
disk I/O problems with LSI Logic RAID controller
We're having a weird disk I/O problem on a 5.4 server connected to external SAS storage with an LSI Logic MegaRAID SAS 1078. The server is used as a Samba file server. Every time we try to copy a large file to the storage-based file system, the disk utilization see-saws: it climbs to 100%, drops to several seconds of inactivity, then climbs to 100% again, and so forth. Here is a snip from the iostat
2009 Nov 20
3
steadily increasing/high loadavg without i/o wait or cpu utilization
Hi all, I just installed the CentOS 5.4 xen-kernel on an Intel Core i5 machine as dom0. After some hours of syncing a raid10 array (8 SATA disks) I noticed a steadily increasing loadavg. Without reasonable I/O wait or CPU utilization, I think the loadavg on this system should be much lower. If this loadavg is normal, I would be grateful if someone could explain why. The screenshots below show that there is
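One likely explanation for the symptom above: Linux loadavg counts tasks in uninterruptible sleep ("D" state, usually blocked on disk or NFS) as well as runnable tasks, so load can climb while CPUs sit idle. A minimal sketch (assumes Linux) that lists the D-state tasks next to the current load:

```shell
# Print the 1/5/15-minute load averages, then any tasks currently in
# uninterruptible sleep (state "D" in /proc/<pid>/stat).
loadavg=$(cut -d' ' -f1-3 /proc/loadavg)
echo "loadavg: $loadavg"
echo "tasks in D state:"
for f in /proc/[0-9]*/stat; do
  # strip everything up to the closing paren of the comm field, then
  # the first remaining field is the task state
  state=$(awk '{sub(/.*\) /, ""); print $1}' "$f" 2>/dev/null)
  if [ "$state" = "D" ]; then
    pid=${f#/proc/}; pid=${pid%/stat}
    echo "  $pid $(cat /proc/$pid/comm 2>/dev/null)"
  fi
done
```

During a raid10 resync, the md resync threads sitting in D state would be consistent with this picture.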
2015 Sep 17
1
poor performance with dom0 on centos7
On 2015-09-17 09:29, Pasi Kärkkäinen wrote:
> > Are you using nfs over UDP or TCP ?
> TCP, but the network can't be the bottleneck; I have tested it with iperf between bare metal/domU's and the nfs domU and it was perfectly fast...
> > I don't think.
> > > If you used NFS over UDP, try running it over TCP.
no, I use it over TCP...
> > What does
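For reference, forcing NFS over TCP is a mount option; a hypothetical /etc/fstab line (the server, export path, and rsize/wsize values are placeholders to illustrate the shape, not recommendations):

```
# NFS mount pinned to TCP; tune rsize/wsize for your network
nfsserver:/export/home  /home  nfs  proto=tcp,rsize=32768,wsize=32768,hard  0 0
```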
2011 Nov 17
4
Dovecot performance issues with many writes
We are currently experiencing a performance issue with our Dovecot system, which we believe is caused by excessive writes to the Dovecot files. The confusing thing is that we are seeing more writes than reads on our Dovecot volume, when you would assume that most of the IO should be reads from customers checking their mail. We're seeing reads vs. writes similar to the following:
# iostat -d 5 -x
2010 Jan 05
4
Software RAID1 Disk I/O
I just installed the CentOS 5.4 64-bit release on a 1.9GHz CPU with 8GB of RAM. It has 2 Western Digital 1.5TB SATA2 drives in RAID1.
[root at server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 1.4T 1.4G 1.3T 1% /
/dev/md0 99M 19M 76M 20% /boot
tmpfs 4.0G 0 4.0G 0% /dev/shm
[root at server ~]#
It's barebones
2015 Jan 30
4
C6 server responding extremely slow on ssh interactive
On 29-01-15 at 21:21, Gordon Messmer wrote:
> I haven't seen delays anywhere near that long before, even with heavy swapping. But I guess I'd look at that sort of thing first.
> Run "iostat -x 2" and see if your disks are being fully utilized during the pauses. Run "top" and see if there's anything useful there. Check swap use with
2009 Apr 08
2
maxbw minimum
The minimum is set at 1200Kbits/second. However, in testing, if I set that for a VNIC, the domU gets no traffic at all (maybe the occasional packet). Is the minimum too low? If I set a maximum of 2000Kbits/second, I get this from nicstat (expecting around 250Kbytes/s total):
Time Int rKB/s wKB/s rPk/s wPk/s rAvs wAvs %Util Sat
04:35:38 xvm15_0 146.6 5.32 102.0
2017 Feb 17
0
vm running slowly in powerful host
Hi, I have a VM which has poor performance. E.g. top needs seconds to refresh its output on the console; same with netstat. The guest is hosting a MySQL DB with a web frontend, and its response is poor too. I'm looking for the culprit. Following top in the guest, I get these hints: memory is free enough, the system is not swapping. The system has 8GB RAM and two CPUs. CPU 0 is struggling with a
2012 Dec 10
8
home directory server performance issues
I'm looking for advice and considerations on how to optimally set up and deploy an NFS-based home directory server. In particular: (1) how to determine hardware requirements, and (2) how to best set up and configure the server. We actually have a system in place, but the performance is pretty bad: the users often experience a fair amount of lag (1--5 seconds) when doing anything on their home
2009 Sep 14
8
10 Node OCFS2 Cluster - Performance
Hi, I am currently running a 10-node OCFS2 cluster (version 1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64. Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux. The cluster is connected to a 1TB iSCSI device presented by an IBM 3300 storage system, running over a 1Gig network. Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
2008 Jan 29
3
Network routes
I am unable to ping NE.TW.RKB.IP1 from an outside network. Other machines which do not have access or routes for NET.WOR.KA.0 respond just fine. How do I get it to respond on both NET.WOR.KA.0 and NE.TW.RKB.0, given that all default traffic should go through NET.WOR.KA.1 unless it is in reply to traffic from NE.TW.RKB.1 or there is an outage?
[root at host20 ~]# route -n
Kernel IP routing table
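The usual answer to this class of question is policy (source-based) routing, so replies leave via the interface the request arrived on. A hedged sketch using iproute2, keeping the post's obfuscated addresses as placeholders; the table number/name and the eth1 interface are assumptions, and the commands need root:

```
# One-time: register a second routing table
echo "200 tableB" >> /etc/iproute2/rt_tables
# Default route for that table via the second network's gateway
ip route add default via NE.TW.RKB.1 dev eth1 table tableB
# Send traffic sourced from the second address through that table
ip rule add from NE.TW.RKB.IP1/32 table tableB
```

The main table's default route via NET.WOR.KA.1 stays untouched, so all other traffic behaves as before.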
2009 Aug 26
26
Xen and I/O Intensive Loads
Hi, folks, I''m attempting to run an e-mail server on Xen. The e-mail system is Novell GroupWise, and it serves about 250 users. The disk volume for the e-mail is on my SAN, and I''ve attached the FC LUN to my Xen host, then used the "phy:/dev..." method to forward the disk through to the domU. I''m running into an issue with high I/O wait on the box (~250%)