similar to: io provider and files in a forceddirectio mounted filesystem

Displaying 20 results from an estimated 100 matches similar to: "io provider and files in a forceddirectio mounted filesystem"

2008 Dec 01
7
DIF content is invalid?
What's going on? # dtrace -s iotime_all.d 100 dtrace: failed to enable 'iotime_all.d': DIF program content is invalid The errant script.... #pragma D option quiet BEGIN { stime = timestamp; io_count = 0; } io:::start /args[2]->fi_pathname != "<none>"/ { start[pid, args[2]->fi_pathname, args[0]->b_edev, args[0]->b_blkno,
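The posted script is cut off above, so the exact cause isn't visible, but one common way to get past a "DIF program content is invalid" rejection is to simplify the associative-array key tuple. A minimal iotime-style sketch (an illustration, not the poster's original script) that keys start times on device and block number only:

```d
#pragma D option quiet

io:::start
/args[2]->fi_pathname != "<none>"/
{
	/* key on device and block number rather than a wide tuple */
	start[args[0]->b_edev, args[0]->b_blkno] = timestamp;
}

io:::done
/start[args[0]->b_edev, args[0]->b_blkno]/
{
	/* sum per-file I/O time in nanoseconds */
	this->delta = timestamp - start[args[0]->b_edev, args[0]->b_blkno];
	@iotime[args[2]->fi_pathname] = sum(this->delta);
	start[args[0]->b_edev, args[0]->b_blkno] = 0;
}
```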
2007 Jun 11
1
2 iosnoop scripts: different results
I am teaching a DTrace class and a student noticed that 2 iosnoop scripts run in two different windows were producing different results. I was not able to answer why this is. Can anyone explain this. Here are the results from the two windows: # io.d ... sched 0 <none> 1024 dad1 W 0.156 bash 1998
2008 Dec 21
0
Profiling a recoll stress-test
Hi all, I don't know if you're familiar with recoll, it's a very handy xapian based desktop search engine system. I'm trying to index a really big folder containing lots of files (18M), the disk size is ~220Gig. The files are quite small text files (mean size ~ 1K). The OS is latest leopard. I think the process is io-bound. 63061 recollinde 10.4% 3:58:06
2005 Sep 16
5
ddi_pathname
Hello, I can see that there is an implementation/emulation of ddi_pathname in DTrace, but I'm a bit confused about the capabilities and invocation of this function. I would like to display the path to the block device from bdev_strategy and other io:genunix::start probes. If someone is familiar with ddi_pathname, could you please provide an example invocation? Thanks, Michael This
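As an aside, the io provider's args[1] (a devinfo_t) already exposes a dev_pathname member, which may avoid calling ddi_pathname directly. A minimal sketch:

```d
io:::start
{
	/* dev_statname is the short device name, dev_pathname the full path */
	printf("%s: %s\n", args[1]->dev_statname, args[1]->dev_pathname);
}
```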
2008 Jan 31
1
simulating directio on zfs?
The big problem that I have with non-directio is that buffering delays program execution. When reading/writing files that are many times larger than RAM without directio, it is very apparent that system response drops through the floor- it can take several minutes for an ssh login to prompt for a password. This is true both for UFS and ZFS. Repeat the exercise with directio on UFS and there is no
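For reference, UFS direct I/O is a mount option, while ZFS has no directio knob; the closest ZFS equivalent is restricting the ARC via the primarycache property. A sketch (device and dataset names are examples, not taken from the poster's system):

```
# UFS: bypass the page cache for this filesystem (Solaris)
mount -o forcedirectio /dev/dsk/c0t0d0s6 /data

# ZFS: cache only metadata in the ARC, the nearest analog to directio
zfs set primarycache=metadata tank/data
```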
2011 Oct 26
1
Re: ceph on btrfs [was Re: ceph on non-btrfs file systems]
2011/10/26 Sage Weil <sage@newdream.net>: > On Wed, 26 Oct 2011, Christian Brunner wrote: >> >> > Christian, have you tweaked those settings in your ceph.conf?  It would be >> >> > something like 'journal dio = false'.  If not, can you verify that >> >> > directio shows true when the journal is initialized from your osd log?
2008 Dec 17
12
disk utilization is over 200%
Hello, I use Brendan's sysperfstat script to see the overall system performance and found that the disk utilization is over 100: 15:51:38 14.52 15.01 200.00 24.42 0.00 0.00 83.53 0.00 15:51:42 11.37 15.01 200.00 25.48 0.00 0.00 88.43 0.00 ------ Utilisation ------ ------ Saturation ------ Time %CPU %Mem %Disk %Net CPU Mem
2013 Dec 02
2
latest sources don't include "drop_cache" option
Was there some reason that patch got dropped? Otherwise rsync eats up all the buffer memory. Note -- I tried directio -- didn't work due to alignment issues -- buffers have to be aligned to sectors. The kernel, if I remember correctly, has been on again/off again on requiring alignment on directio -- because most of the drivers and devices do for directio to work, at. "dd"
2004 Aug 23
1
just fyi
remember that even though you have a filesystem etc it's always good to have separate partitions for stuff. eg don't put all your redologs and mirrors on the same partition or don't put your archives on the same partition as data. this is not just for ocfs, but just .. common sense I think to split stuff up.
2009 Aug 24
2
[RFC] Early look at btrfs directIO read code
This is my still-working-on-it code for btrfs directIO read. I'm posting it so people can see the progress being made on the project and can take an early shot at telling me this is just a bad idea and I'm crazy if they want to, or point out where I made some stupid mistake with btrfs core functions. The code is not complete and *NOT* ready for review or testing. I looked at
2008 Mar 04
0
Device-mapper-multipath not working correctly with GNBD devices
Hi all, I am trying to configure a failover multipath between 2 GNBD devices. I have a 4 nodes Redhat Cluster Suite (RCS) cluster. 3 of them are used for running services, 1 of them for central storage. In the future I am going to introduce another machine for central storage. The 2 storage machine are going to share/export the same disk. The idea is not to have a single point of failure
2011 Sep 30
1
Issue with reshare shares
Hi there, we have a customer with some old Windows 98 stuff. The network has been migrated to a 2008 domain and Windows 98 won't authenticate (and thus can not access shares, also all computer names (DC's / fileservers) are longer than 8 characters). Figured I would mount the shares on the fileservers on a linux box (ubuntu 10.04 LTS 64 bit server) and then export them again through
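A sketch of that re-export setup (server names, paths, and credentials are placeholders, not taken from the poster's network; Win98 clients typically also need lanman auth = yes on the re-exporting Samba server):

```
# mount the 2008-domain share on the Linux box
mount -t cifs //fileserver/share /srv/reexport -o username=svcacct

# smb.conf stanza on the Linux box re-exporting it
[reexport]
   path = /srv/reexport
   read only = no
   lanman auth = yes
```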
2007 Apr 13
2
How to dtrace which process is writing a veritas volume?
Hi all, How to dtrace who is writing a veritas volume? Thanks.
2004 Apr 22
1
A couple more minor questions about OCFS and RHE L3
Sort of a followup... We've been running OCFS in sync mode for a little over a month now, and it has worked reasonably well. Performance is still a bit spotty, but we're told that the next kernel update for RHEL3 should improve the situation. We might eventually move to Polyserve's cluster filesystem for its multipathing capability and potentially better performance, but at least we
2008 Jan 18
33
LatencyTop
I see Intel has released a new tool. Oh, it requires some patches to the kernel to record latency times. Good thing people don't mind patching their kernels, eh? So who can write the equivalent latencytop.d the fastest? ;-) http://www.latencytop.org/ -- cburgess at qnx.com
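As a starting point, per-process off-CPU time (roughly what LatencyTop reports) can be sketched in D with the sched provider; an untested sketch, not a full latencytop.d:

```d
#pragma D option quiet

sched:::off-cpu
{
	self->ts = timestamp;
}

sched:::on-cpu
/self->ts/
{
	/* distribution of time each process spent blocked or waiting */
	@blocked[execname] = quantize(timestamp - self->ts);
	self->ts = 0;
}

tick-10s
{
	exit(0);
}
```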
2011 May 27
0
Slow performance with cifs client
Hi all. I have a problem with the cifs module in my gigabit network. I get the following performance: 95 mbytes/s with FTP 65 mbytes/s with samba, windows 7 client 8 mbytes/s with the cifs module on opensuse 11.4 I have tried all the solutions found on google, such as directio, modifying rsize and wsize, with no improvements. Any advice? Is this the right place to discuss issues with cifs? P
2018 Mar 29
0
Re: [PATCH v7 6/6] v2v: Add -o rhv-upload output mode (RHBZ#1557273).
And another problem ... nbdkit: python[1]: error: /home/rjones/d/libguestfs/tmp/rhvupload.riX9kG/rhv-upload-plugin.py: callback failed: pwrite Traceback (most recent call last): File "/home/rjones/d/libguestfs/tmp/rhvupload.riX9kG/rhv-upload-plugin.py", line 268, in pwrite http.send(buf) File "/usr/lib64/python3.6/http/client.py", line 986, in send
2009 Mar 11
0
dbox on CIFS
Ok - at this time I am using the following non-default settings: Samba server - smb.conf [Mailstorage] ea support = Yes use sendfile = Yes fake oplocks = Yes delete readonly = Yes Dovecot - dovecot.conf mmap_disable = yes dotlock_use_excl = yes fsync_disable = yes lock_method = dotlock I'm still testing - but at this time I'm seeing no errors. Two
2008 Nov 05
0
Curious Question about Multiple CIFSD's
I know this isn't the right place to ask this question, but does anybody know if it's possible to force a Linux client machine to spawn multiple cifsd's when connecting to a SINGLE Samba Server? I seem to be running into some Linux cifs client limits with a single connection. One cifs client can talk to multiple Samba servers at around 100 MB/sec (aggregate) over a single GigE
2010 Apr 30
1
Cannot mount Windows 7 share with CIFS Error 112 Host is down
Hi. I just got a new Windows 7 Home Edition computer and am unable to mount its shares on my Linux system. I'm running Fedora 11, samba 3.4.7 I have no trouble mounting shares from XP systems on the network using the mount command below. I can access the Windows 7 share with no problems using smbclient on Linux. The Windows 7 share is accessible from the XP systems. Here is the mount
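One thing worth trying (a guess for this era of kernels, not a confirmed fix) is forcing NTLMv2 authentication, since Windows 7 defaults to stricter auth than XP; server name, share, and user below are placeholders:

```
mount -t cifs //win7box/share /mnt/win7 -o username=user,sec=ntlmv2
```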