similar to: LVM performance with snapshots

Displaying 20 results from an estimated 12000 matches similar to: "LVM performance with snapshots"

2007 Aug 22
5
Slow concurrent actions on the same LVM logical volume
Hi all! I have problems with concurrent filesystem actions on an ocfs2 filesystem which is mounted by 2 nodes. OS=RH5ES and OCFS2=1.2.6. For example: if I have an LV called testlv which is mounted on /mnt on both servers and I do a "dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000" on server 1 and at the same time run a "du -hs /mnt/test.a", it takes about 5 seconds for du -hs to execute: 270M
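The reported test is easy to reproduce; a minimal sketch, assuming the OCFS2 volume is mounted at /mnt on both nodes and the du runs on the second node:

    # node 1: write a ~1 GB test file onto the shared volume
    dd if=/dev/zero of=/mnt/test.a bs=1024 count=1000000

    # node 2, while the dd is still running: time the metadata lookup
    time du -hs /mnt/test.a

The slowness is typically cluster-lock contention: the second node must take DLM locks on a file the first node is actively extending.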
2008 Apr 12
2
merge an lvm snapshot back
So how does one accomplish this if, say, the snap is now deemed the copy of interest? I am hoping dd is not the only answer :) Thanks! jlc
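Current LVM2 releases can merge a snapshot back into its origin directly; a minimal sketch, assuming a volume group vg0 with origin LV testlv and snapshot testsnap (hypothetical names):

    # merge the snapshot's contents back into its origin
    lvconvert --merge vg0/testsnap

    # if the origin is mounted, the merge is deferred and completes
    # the next time both LVs are activated, e.g.:
    umount /mnt && lvchange -an vg0/testlv && lvchange -ay vg0/testlv

After the merge finishes, the snapshot LV is removed automatically.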
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list, I thought I'd just share my experiences with this 3Ware card, and see if anyone might have any suggestions. System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID 1 plus 2 hot spare config. The array is properly initialized, write cache is on, as is queueing (and supported by the drives). StoreSave
2009 Sep 14
8
10 Node OCFS2 Cluster - Performance
Hi, I am currently running a 10 Node OCFS2 Cluster (version 1.3.9-0ubuntu1) on Ubuntu Server 8.04 x86_64. Linux n1 2.6.24-24-server #1 SMP Tue Jul 7 19:39:36 UTC 2009 x86_64 GNU/Linux The cluster is connected to a 1 TB iSCSI device presented by an IBM 3300 storage system, running over a 1 Gbit network. Mounted on all nodes: /dev/sdc1 on /cfs1 type ocfs2
2007 May 16
1
Disk accounting with LVM Snapshots
Hi, I was reading a lot of data from one LVM logical volume and wanted to watch the disk statistics with iostat today. To my surprise, I was seeing the same activity on two logical volumes (LV) and the physical disk containing the two. A little investigation showed that one LV was a snapshot of the other, and I was reading from one of the snapshots. While I understand that the two LVs
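When iostat shows activity on multiple dm devices, the snapshot relationships can be confirmed with standard LVM tools; a quick sketch:

    # show how dm-N devices stack on one another
    dmsetup ls --tree

    # list LVs with their snapshot origin and backing devices
    lvs -a -o lv_name,origin,devices

    # extended per-device statistics, one-second intervals
    iostat -x 1

Reads from a snapshot that hit unchanged blocks are serviced from the origin's extents, which is why both LVs show I/O.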
2020 Jul 03
2
Slow terminal response Centos 7.7 1908
Hey! I have a strange condition in one of the servers that I don't know where to start looking into. I log in to the server via SSH (can't do it any other way) and anything that I type is slow; HTTP sessions time out waiting for screen redraw. So, the server is acting "slow". The server is bare metal, no virtual services, no alarms in the disk RAID. Note: the server was restarted because of a power failure.
2008 Feb 13
6
pvmove speed
Are there any ways to improve/manage the speed of pvmove? The man page doesn't show any documented switches for priority scheduling. Iostat shows the system way underutilized even though the LV whose PEs are being migrated is continuously being written (slowly) to. Thanks! jlc
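pvmove indeed has no bandwidth or priority switch; what can help is moving one LV at a time and watching progress. A sketch with hypothetical LV and device names:

    # move only mylv's extents, reporting progress every 10 seconds
    pvmove -i 10 -n mylv /dev/sdb1 /dev/sdc1

    # an interrupted move can be restarted with a bare:
    pvmove

    # check which extents now live where
    lvs -a -o+devices

The slowness is largely inherent: pvmove mirrors each segment through device-mapper and must keep the volume consistent while it remains in use.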
2010 Dec 09
1
Extremely poor write performance, but read appears to be okay
Hello, I'm writing from the other side of the world from where my systems are, so details are coming in slowly. We have a 6TB OCFS2 volume across 20 or so nodes, all running OEL 5.4 with ocfs2-1.4.4. The system has worked fairly well for the last 6-8 months. Something has happened over the last few weeks which has driven write performance nearly to a halt. I'm not sure how to proceed, and
2020 Jul 03
1
Slow terminal response Centos 7.7 1908
Hi Erick, what was the value of 'si' in top? Best Regards, Strahil Nikolov On 3 Jul 2020 at 18:48:30 GMT+03:00, Erick Perez - Quadrian Enterprises <eperez at quadrianweb.com> wrote: >It was found that the software NIC team created in CentOS was having >issues due to a failing network cable. The team was going berserk with >up/down changes. > > >On Fri, Jul 3,
2016 May 25
1
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
On 2016-05-25 19:13, Kelly Lesperance wrote: > Hdparm didn't get far: > > [root at r1k1 ~] # hdparm -tT /dev/sda > > /dev/sda: > Timing cached reads: Alarm clock > [root at r1k1 ~] # Hi Kelly, Try running 'iostat -xdmc 1'. Look for a single drive that has substantially greater await than ~10msec. If all the drives except one are taking 6-8msec, but one is very
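If one drive does stand out, its SMART data is the next thing to check; a sketch, assuming the suspect drive is /dev/sdh (hypothetical):

    # full SMART report: look at Reallocated_Sector_Ct,
    # Current_Pending_Sector, and the device error log
    smartctl -a /dev/sdh

A drive that is internally retrying reads will drag the whole md array down during a check.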
2007 Oct 18
1
Vista performance (uggh)
Issue: Vista reads slowly from a samba server. This appears to pop up periodically here and elsewhere. My smb.conf file has: [homes] ... vfs objects = readahead As suggested elsewhere. Writes are approximately 17-18MB/s, which is acceptable. Reads are in the 8MB/s range, which is appallingly slow. Using linux smbclient and Windows XP clients I can read at 25+MB/s. I've enabled vfs
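For reference, the readahead module is enabled per share and has two optional tunables; a minimal smb.conf sketch (the byte values are illustrative):

    [homes]
        vfs objects = readahead
        # optional vfs_readahead tuning, both in bytes
        readahead:offset = 1048576
        readahead:length = 1048576

The module issues readahead hints to the kernel ahead of the client's read position, which is why it is suggested for Vista clients in particular.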
2010 Oct 19
2
pdflush kernel thread pops up every 10 seconds or so and video decoding grinds to a halt for 1/2 a second
Hi. A friend of mine was doing real-time video decoding on Fedora Core 13 and he had a performance glitch (1/2 a second freeze) every 5-10 seconds. "top" showed flush-253:0 process at the moment of the freeze. Major device number 253 corresponds to device-mapper. I advised my friend to re-install his FC13 without LVM, to see if the glitch is related to LVM. After re-installing FC13
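Glitches tied to the flusher thread are commonly investigated by tuning the dirty-writeback sysctls so writeback happens in smaller, more frequent bursts; a sketch (values are illustrative experiments, not recommendations):

    # start background writeback sooner, and in smaller amounts
    sysctl -w vm.dirty_background_ratio=2
    sysctl -w vm.dirty_ratio=5

    # wake the flusher every second instead of every five
    sysctl -w vm.dirty_writeback_centisecs=100

If the half-second freezes track these settings, the problem is bursty writeback rather than LVM itself.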
2007 Apr 01
8
zfs destroy <snapshot> takes hours
Hello, I am having a problem destroying zfs snapshots. The machine is almost not responding for more than 4 hours, after I started the command and I can't run anything else during that time - I get (bash): fork: Resource temporarily unavailable - errors. The machine is still responding somewhat, but very, very slow. It is: P4, 2.4 GHz with 512 MB RAM, 8 x 750 GB disks as raidZ,
2008 Oct 05
1
io writes very slow when using vmware server
We are struggling with a strange problem. When we have some VMware clients running (mostly MS Windows clients), then the IO-write performance on the host becomes very bad. The guest OSes do not do anything; just having them started, sitting at the login prompt, is enough to trigger the problem. The host has plenty of RAM (4 GB), and all clients fit easily into the space. The disk system is a
2015 Jun 24
2
EXT4/LVM recommendations for 3TB of mdbox ?
Hello, Do you have recommendations on EXT4 and LVM options for a 3TB file-system for mdbox? We currently use the mbox format on XFS with poor performance since the update to v2.1 (Debian). We will switch to EXT4 to have the possibility of shrinking the file-system if needed (which is not possible with XFS); we currently have LVM partitions, but with mdbox we will use LVM snapshots to
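One common layout is to keep free space in the volume group for snapshots and grow the filesystem only as needed; a sketch, assuming a VG named vg0 (names and sizes are illustrative):

    # use 80% of the VG, leaving headroom for snapshots and growth
    lvcreate -l 80%VG -n mail vg0
    mkfs.ext4 -m 0 -L mailstore /dev/vg0/mail

    # ext4 can later be shrunk, but only offline:
    # umount, e2fsck -f, resize2fs to the smaller size, then lvreduce

Growing, by contrast, can be done online with lvextend followed by resize2fs.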
2016 May 27
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
All of our Kafka clusters are fairly write-heavy. The cluster in question is our second-heaviest; we haven't yet upgraded the heaviest, due to the issues we've been experiencing in this one. Here is an iostat example from a host within the same cluster, but without the RAID check running: [root at r2k1 ~] # iostat -xdmc 1 10 Linux 3.10.0-327.13.1.el7.x86_64 (r2k1) 05/27/16 _x86_64_ (32 CPU)
2016 May 25
6
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
I've posted this on the forums at https://www.centos.org/forums/viewtopic.php?f=47&t=57926&p=244614#p244614 - posting to the list in the hopes of getting more eyeballs on it. We have a cluster of 23 HP DL380p Gen8 hosts running Kafka. Basic specs: 2x E5-2650 128 GB RAM 12 x 4 TB 7200 RPM SATA drives connected to an HP H220 HBA Dual port 10 GB NIC The drives are configured as one large
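The md check rate itself is throttled by two sysctls, which is the first thing worth checking when a RAID check drives up iowait; a sketch (the cap value is illustrative):

    # current limits, in KB/s per device
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

    # temporarily cap the check so production I/O keeps priority
    sysctl -w dev.raid.speed_limit_max=10000

A running check can also be paused and resumed via /sys/block/mdX/md/sync_action (write "idle" to stop, "check" to restart).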
2011 May 07
7
kswapd taking 100% cpu with no swap on system
Hi All, I have a Xeon server with 16 GB RAM and no swap space. I am running a Cassandra server on two nodes in a cluster. When there is high load on the server, kswapd0 kicks in, takes 100% CPU, and makes the machine very slow, and we need to restart our Cassandra server. I have the latest kernel, 2.6.18-238.9.1.el5. Please let me know how I can fix this issue. It's hurting us badly; this is our production server. any
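With no swap configured, kswapd spinning usually points at reclaim thresholds rather than actual swapping; two knobs commonly examined in this situation, as a sketch (values are illustrative):

    # avoid reclaim strategies aimed at swapping anonymous pages
    sysctl -w vm.swappiness=0

    # keep a larger free-page reserve so reclaim runs earlier,
    # in smaller bursts, instead of stalling under memory pressure
    sysctl -w vm.min_free_kbytes=262144

Cassandra's mmap-heavy I/O also makes it worth checking whether the page cache, rather than the JVM heap, is what is filling memory.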
2008 Jun 19
3
lvm with iscsi devices on boot
Hi All, My CentOS 5.1 server is using iSCSI attached disks connecting to a dual controller storage array. I have also configured multipathd to manage the multiple paths. Everything works well, and on boot the dev nodes are automatically created in /dev/mapper. On these devices, I have created logical volumes using lvm2. My problem is that lvm does not recognize these iscsi/multipath volumes on
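On CentOS 5 this is usually an init-ordering problem: LVM scans before the iSCSI login has happened. A sketch of the pieces involved (names and paths are illustrative):

    # /etc/fstab: _netdev defers the mount until network storage is up,
    # so it is handled by the netfs service rather than early boot
    /dev/vg_iscsi/lv_data  /data  ext3  _netdev  0 0

    chkconfig iscsi on
    chkconfig netfs on

    # if the VG is still inactive after the iSCSI login, activate it:
    vgchange -ay vg_iscsi

Some releases activate LVM from the netfs script itself; if yours does not, a vgchange -ay in an init script ordered after iscsi does the job.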
2008 Mar 03
3
LVM and kickstarts ?
Hey, Can anyone tell me why option 1 works and option 2 fails? I know I need swap and such; however, in troubleshooting this issue I trimmed down my config. It fails on trying to format my logical volume, because the mount point does not exist (/dev/volgroup/logvol). It seems that with option 2, the partitions are created and LVM is set up correctly. However, the volgroup / logvolume was not
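For comparison, a minimal kickstart LVM layout that formats cleanly; a sketch using the names from the failing config (sizes are illustrative):

    part /boot --fstype ext3 --size=100
    part pv.01 --size=1 --grow
    volgroup volgroup pv.01
    logvol / --fstype ext3 --vgname=volgroup --name=logvol --size=1 --grow

The ordering matters: the part pv.NN line must exist before volgroup references it, and logvol's --vgname must match the volgroup name exactly.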