similar to: heavy IO load when working with sparse files (centos 6.4)

Displaying 20 results from an estimated 5000 matches similar to: "heavy IO load when working with sparse files (centos 6.4)"

2012 Mar 07
1
copy file from host to live guest (speed)
On Wed, Mar 07, 2012 at 05:57:45AM -0800, THO HUYNH wrote: > I tried to copy a file from the host to the running guest after I had > mounted the guest, but it seemed slow. The speed is about 6-8 > MB/s. I thought it would be the same as a real hard drive (about > 20MB/s). Is this using 'guestmount --live'? Unfortunately FUSE is inefficient, particularly the way we implement it in
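For reference, the kind of invocation being asked about looks roughly like this (guest name, device and paths are placeholders, and --live availability depends on the libguestfs version):
    # mount the running guest's root filesystem on the host via FUSE
    guestmount -d myguest --live -m /dev/sda1 /mnt/guest
    cp bigfile /mnt/guest/tmp/    # every byte passes through FUSE, hence the ~6-8 MB/s
    fusermount -u /mnt/guest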
2010 Aug 07
13
PowerEdge R510 with PERC H200/H700 with ZFS
Anyone have any experience with an R510 with the PERC H200/H700 controller with ZFS? My perception is that Dell doesn't play well with OpenSolaris. Thanks, Geoff
2012 Feb 06
0
[PATCH] Btrfs-progs: make scrub IO priority configurable
The btrfs tool is changed to support command line parameters for configuring the IO priority of the scrub tasks. The default is also changed: the default IO priority for scrub is now the idle class. Some basic performance measurements have been done with the goal of finding which IO priority for scrub gives the best overall disk data throughput. The kernel was configured to use the CFQ IO
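With the patch applied, the scrub IO class can be chosen when the scrub is started, roughly like this (option letters as in btrfs-progs; check btrfs scrub start --help on your version, and the mount point is a placeholder):
    btrfs scrub start -c 3 /mnt/data        # idle class, the new default proposed here
    btrfs scrub start -c 2 -n 7 /mnt/data   # best-effort class, lowest priority within it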
2012 Dec 06
1
ionice...
Hey, does anyone have successful experience with ionice? I tried it with the 'idle' (-c 3) parameter. When I did a quick test (find /), it seemed to work, with frequent pauses (I guess waiting for idle). But when I used it on my big tar, it made things worse than without it... which seems counter-intuitive. Thx, JD
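For reference, the idle-class runs described here look like this; note that the idle class is only honoured by the CFQ (and later BFQ) IO schedulers, which may explain uneven results (archive name and path below are placeholders):
    ionice -c 3 find /                           # quick test in the idle IO class
    ionice -c 3 tar -czf backup.tar.gz /srv      # same idea for a big tar job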
2007 Nov 15
5
IO causing major performance issues
Hello everyone. I'm wondering what other people's experiences are WRT systems becoming unresponsive (unable to ssh in, etc) for brief periods of time when a large amount of IO is being performed. It's really starting to cause a problem for us. We're on Dell PowerEdge 1955 blades - but this same issue has caused us problems on PE1950, PE1850, PE1750 servers. We're running
2008 Sep 02
3
Control IO related to a process
Is there a way to nice the IO of a process such as dd? If not, what would be a way to keep the IO level of such a process from bogging down a server too severely? Thanks, jlc
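ionice is the IO counterpart to nice; a minimal sketch, with placeholder devices and paths:
    ionice -c 2 -n 7 dd if=/dev/sda of=/backup/disk.img bs=1M   # best-effort class, lowest priority
    ionice -c 3 dd if=/dev/sda of=/backup/disk.img bs=1M        # idle class: only uses otherwise-idle disk time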
2009 Jul 09
1
How to use ionice?
Hi, I need to rsync a remote live server to a local backup machine. The local backup machine starts the rsync on a scheduled basis (i.e. pulling from the remote), and I would like it to reduce the load on the remote live server by using nice/ionice at the far end. I'm connecting to the remote machine with ssh. How can I get the remote server to be running its part of the chain
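A common way to do this is to wrap the remote rsync in ionice/nice via --rsync-path; a sketch, with host and paths as placeholders:
    rsync -a -e ssh \
        --rsync-path='ionice -c 3 nice -n 19 rsync' \
        user@liveserver:/var/www/ /backup/www/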
2010 Jul 03
16
ionice
Hi Everyone, I'm experimenting with ionice, and I have to say at first impressions I'm very impressed! Does anyone have any idea how I could script the ionice config? I'm using phy for my DomUs so everything appears in ps as blkback.<DOMID>.xvda1 The problem is that the process ID for the blkback process will change after every DomU restart,
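A rough sketch of the sort of script this needs, re-resolving the blkback PIDs each time it runs (the process name pattern follows the description above; the idle class is just an example policy):
    #!/bin/sh
    # put every blkback backend process into the idle IO class
    for pid in $(pgrep -f 'blkback\.'); do
        ionice -c 3 -p "$pid"
    done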
2018 Aug 14
2
USB disk IO
Hello - frequently I turn on my external USB 3.0 disk and back. While my machine is copying and backing up my desktop becomes very sluggish. Is there a way to change that ? I am using CentOS 7.5 x86 with a very nice processor extra cores available and plenty of memory. There is no reason the "other" cores cannot keep the desktop going. Thanks, Jerry
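The sluggishness here is usually writeback/IO pressure rather than CPU, so spare cores do not help much; two things worth trying, as a sketch (paths and values are only illustrative):
    ionice -c 3 rsync -a /home/jerry/ /mnt/usbdisk/      # run the backup itself in the idle IO class
    sysctl -w vm.dirty_background_bytes=16777216         # start writeback sooner...
    sysctl -w vm.dirty_bytes=50331648                    # ...and cap dirty data queued for the slow USB disk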
2010 Jul 08
2
slow down dd - how?
How can I slow down dd? I don't want to slow down the PC when generating a big file [~40 GByte]. Does ionice work properly? Thank you for any help! :\
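Besides ionice, dd can be rate-limited by piping through pv, or told to bypass the page cache so the 40 GByte write does not starve everything else; a sketch with made-up paths:
    dd if=/dev/zero bs=1M count=40960 | pv -L 20m > /data/bigfile     # cap writes at 20 MB/s
    dd if=/dev/zero of=/data/bigfile bs=1M count=40960 oflag=direct   # O_DIRECT, skips the page cache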
2006 Jun 21
2
ZFS and Virtualization
Hi experts, I have a few questions about ZFS and virtualization: Virtualization and performance: When filesystem traffic occurs on a zpool containing only spindles dedicated to this zpool, i/o can be distributed evenly. When the zpool is located on a LUN sliced from a raid group shared by multiple systems, the capability of doing i/o from this zpool will be limited. Avoiding or limiting i/o to
2023 Aug 18
1
Updating samba with bookworm-backports
18.08.2023 10:47, spindles seven via samba wrote: > On August 16, 2023 spindles seven wrote: >> roy at franks:~$ sudo apt -t bookworm-backports install samba winbind >> Reading package lists... Done Building dependency tree... Done Reading state information... >> Done Some packages could not be installed. This may mean that you have >> requested an impossible situation
2023 Aug 18
1
Updating samba with bookworm-backports
On August 16, 2023 spindles seven wrote: > roy at franks:~$ sudo apt -t bookworm-backports install samba winbind > Reading package lists... Done Building dependency tree... Done Reading state information... > Done Some packages could not be installed. This may mean that you have > requested an impossible situation or if you are using the unstable distribution that > some required
2024 Mar 11
3
Updating to Samba Version 4.19.5 via Debian Bookworm Backports
11.03.2024 17:40, spindles seven via samba: > Hi > > After seeing that Bookworm Backports now has Samba version 4.19.5, I decided to update my samba machines. However, I find that on those running the AMD64 architecture, the update doesn't appear. Machines running arm architectures (armel & arm64) are updated correctly. I haven't changed anything in the
2006 Sep 28
13
jbod questions
Folks, We are in the process of purchasing new SANs that our mail server runs on (JES3). We have moved our mailstores to ZFS and continue to have checksum errors -- they are corrected, but this improves on the UFS inode errors that require system shutdown and fsck. So, I am recommending that we buy small JBODs, do raidz2, and let ZFS handle the raiding of these boxes. As we need more
2007 Jun 22
1
Implicit storage tiering w/ ZFS
I'm curious if there has been any discussion of or work done toward implementing storage classing within zpools (this would be similar to the storage foundation QoSS feature). I've searched the forum and inspected the documentation looking for a means to do this, and haven't found anything, so pardon the post if this is redundant/superfluous. I would imagine this would
2010 Feb 16
2
Speed question: 8-disk RAIDZ2 vs 10-disk RAIDZ3
I currently am getting good speeds out of my existing system (8x 2TB in a RAIDZ2 exported over fibre channel) but there's no such thing as too much speed, and these other two drive bays are just begging for drives in them.... If I go to 10x 2TB in a RAIDZ3, will the extra spindles increase speed, or will the extra parity writes reduce speed, or will the two factors offset and leave things
2009 Jul 27
3
I/O load distribution
Hi, What is the best way to deal with I/O load when running several VMs on a physical machine with local or remote storage? What I'm primarily worried about is the case when several VMs cause disk I/O at the same time. One example would be the "updatedb" cronjob of the mlocate package. If you have, say, 5 VMs running on a physical system with a local software raid-1 as storage and
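For the updatedb example specifically, the usual mitigations are to run it in the idle IO class inside each guest and to stagger its start time so the VMs do not all hit the disk at once; a sketch (script path per the mlocate package, offset value arbitrary):
    #!/bin/bash
    # /etc/cron.daily/mlocate in each guest
    sleep $(( RANDOM % 3600 ))                  # random 0-60 min offset to de-synchronise the VMs
    ionice -c 3 nice -n 19 /usr/bin/updatedb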
2010 Dec 21
5
relationship between ARC and page cache
One thing I've been confused about for a long time is the relationship between ZFS, the ARC, and the page cache. We have an application that's a quasi-database. It reads files by mmap()ing them. (writes are done via write()). We're talking 100TB of data in files that are 100k->50G in size (the files have headers to tell the app what segment to map, so mapped chunks
2009 Jul 03
1
Bug#535562: logcheck runs at normal I/O priority, and is hard-coded to nice -n10
Package: logcheck Version: 1.2.69 Severity: normal logcheck is a "batchy" job, but currently runs at normal I/O priority, and is hard-coded to run with a niceness of 10. As a result logcheck can degrade interactive performance on machines with a lot of log traffic, relatively slow CPU or expensive I/O. It'd be useful if the "ionice" and "schedtool" utilities
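Until the package does this itself, the cron entry can be wrapped by hand; a sketch, assuming the Debian cron fragment is /etc/cron.d/logcheck (the exact original line may differ):
    # /etc/cron.d/logcheck
    2 * * * * logcheck if [ -x /usr/sbin/logcheck ]; then ionice -c 3 nice -n 10 /usr/sbin/logcheck; fi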