Displaying 20 results from an estimated 20000 matches similar to: "io:::start to nfs mount?"
2007 Apr 13
2
How to dtrace which process is writing to a Veritas volume?
Hi all,
How can I use dtrace to find out which process is writing to a Veritas volume?
Thanks.
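One way to approach this is the io provider. A rough sketch (untested; the device name "vxdsk0" is a placeholder for whatever the Veritas volume's device is on your system):

```d
/* Count writers per process; match args[1]->dev_statname (or
   args[2]->fi_mount) against the Veritas volume in question.
   "vxdsk0" below is a placeholder, not a real device name. */
io:::start
/(args[0]->b_flags & B_WRITE) && args[1]->dev_statname == "vxdsk0"/
{
        @writes[pid, execname] = count();
}
```

Run it with `dtrace -s writers.d`; the aggregation prints on Ctrl-C.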
2006 Oct 26
2
What has been swapped out?
I have a SunRay server that I am looking at to determine some sizing requirements in my department. The machine has 16 GB of RAM and 10 GB of swap; currently about 4 GB of swap is in use. I am wondering if dtrace/mdb can be used to find out which lwps/processes have been swapped out?
Any hints?
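One possible starting point via the fbt provider. This is only a sketch: the function name swapout_lwp() and its argument type are assumptions about the kernel internals of your build, so verify the probe exists first (dtrace -l -f swapout_lwp):

```d
/* Hypothetical sketch: report each LWP as the kernel swaps it out.
   swapout_lwp() and the klwp_t/proc_t member chain are assumptions
   about this particular build's internals. */
fbt::swapout_lwp:entry
{
        printf("swapping out lwp of %s (pid %d)\n",
            stringof(args[0]->lwp_procp->p_user.u_comm),
            args[0]->lwp_procp->p_pidp->pid_id);
}
```

This only catches swap-outs as they happen; for what is already swapped out, mdb -k structure walking is the likelier tool.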
This message posted from opensolaris.org
2008 Oct 15
0
Code Review for NFS v3 client DTrace provider
Hi,
I have an implementation of DTrace probes for the NFS v3 client. A webrev
of my change is at:
http://cr.opensolaris.org/~danhua/webrev/
The provider and probes are described in attached proposal.
Comments welcome!
Regards,
Danhua
[Attachment scrubbed: proposal-nfsv3client.txt]
2008 Sep 10
1
Proposal for DTrace probes for NFS v3 client
Hi,
This is a proposal for probes for the NFS v3 client (see attachment).
These probes live under a new DTrace provider: nfsv3client. They
support tracing NFS v3 client activity such as sending RPC requests to
the NFS v3 server and receiving RPC replies from it. The formats and
arguments of these new probes are quite similar to the existing NFS v3
probes on the server side (provider: nfsv3).
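If the client provider does mirror the server-side nfsv3 probe and argument names (an assumption here, since the attached proposal is not shown), a top-files one-liner against it might look like:

```d
/* Assumes the proposed nfsv3client provider exposes op-*-start
   probes with the same nfsv3opinfo_t argument (noi_curpath) as
   the server-side nfsv3 provider. */
nfsv3client:::op-read-start,
nfsv3client:::op-write-start
{
        @ops[probename, args[1]->noi_curpath] = count();
}
```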
2011 Jul 22
0
failed creating protocol instance to play queued url
I run thunderbird (both 3.x and 5.x) against dovecot 2.x and frequently
see delays "loading message". I admit to having a very large mailbox and
running multiple imap clients.
I've turned on thunderbird debugging, and at least one pattern I see in
the log during the delays (maybe a minute long) is the failed protocol
message above. These are not the only delays, but are the ones
2005 Sep 22
0
io provider and files in a forceddirectio mounted filesystem
The following script is a first attempt to discover I/O patterns in a
database setup:
#------------------------------------------------------------------
#pragma D option dynvarsize=128m
dtrace:::BEGIN
{
}
pid$target::kaio:entry
{
self->doit = 1;
}
pid$target::_aiodone:return
{
self->doit = 0;
}
io:::start
/self->doit || execname == "oracle"/
{
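The excerpt ends inside the io:::start clause. A plausible completion, assuming the goal is per-file read/write byte counts (my assumption, not the poster's original action):

```d
/* Possible body for the truncated clause: aggregate bytes per
   pathname, split into reads and writes. */
io:::start
/self->doit || execname == "oracle"/
{
        @bytes[args[2]->fi_pathname,
            args[0]->b_flags & B_WRITE ? "W" : "R"] = sum(args[0]->b_bcount);
}
```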
2008 May 20
7
IO probes and forcedirectio
Hi,
I'm working on some performance analysis of our database, and it seems that when the file system (UFS) is mounted with forcedirectio, the io probes are not triggered when an I/O event occurs.
Can you confirm this? If so, why?
Seb
2003 Jan 13
1
Exporting samba mount via NFS
Perhaps I'm trying to do something that is not allowed, but I have found
no such restriction in the documentation.
To limit the contact of my Linux boxes with the "products of Bill &
Co.", I have mounted a Windows share on my Linux server. When I try to
export that same share from the Linux server (as a plain old NFS
volume), it mounts without error, but the mountpoint
2009 Sep 14
1
return from memset on Mac OS X
Does dtrace have a problem catching the return from memset on Mac OS X?
The script below catches the entry just fine, but the return clause is
never entered.
Thanks, Joel
---
pid$target::memset:entry
/arg1 == 0/
{
self->size = arg2;
self->ts = timestamp;
self->vts = vtimestamp;
}
pid$target::memset:return
/self->size/
{
@ts = sum(timestamp - self->ts);
@vts =
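The excerpt cuts off inside the return clause; a plausible completion of the script would be (and note, as a hedged aside, that one common reason a pid-provider return probe never fires is the function tail-calling or jumping into an optimized variant, which may be what happens with memset on Mac OS X):

```d
/* Possible completion of the truncated return clause. */
pid$target::memset:return
/self->size/
{
        @ts = sum(timestamp - self->ts);
        @vts = sum(vtimestamp - self->vts);
        @bytes = sum(self->size);
        self->size = 0;
        self->ts = 0;
        self->vts = 0;
}
```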
2009 Aug 14
4
order bug, legacy mount and nfs sharing
Hi,
I've encountered this bug: http://www.opensolaris.org/jive/thread.jspa?threadID=108316&tstart=30
and to work around the problem I'm using legacy mounts.
Now the system boots without problems, but the NFS server doesn't start because it can't find any shares.
So I've disabled NFS with zfs set sharenfs=off on my zfs filesystems and tried to use the share
2006 Jul 20
2
How can I watch IO operations with dtrace on zfs?
I have been using the iosnoop script (see http://www.opensolaris.org/os/community/dtrace/scripts/) written by Brendan Gregg to look at the I/O operations of my application. When I was running my test program on a UFS filesystem I could see both read and write operations like:
UID PID D BLOCK SIZE COMM PATHNAME
203803 4436 R 6016592 16384 diskio <none>
203803 4436 W 3448432
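The `<none>` pathname is expected: by the time ZFS I/O reaches the io provider it has passed through the ZIO pipeline and the file association is gone. One workaround (a sketch; it assumes a build recent enough to have the fsinfo provider, and "myapp" is a placeholder execname) is to trace a layer higher, at the VFS boundary:

```d
/* Trace reads/writes at the VFS layer, where ZFS still has
   pathname information; "myapp" is a placeholder execname. */
fsinfo:::read,
fsinfo:::write
/execname == "myapp"/
{
        @bytes[probename, args[0]->fi_pathname] = sum(arg1);
}
```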
2015 Feb 27
2
Odd nfs mount problem
I'm exporting a directory, firewall's open on both machines (one CentOS
6.6, the other RHEL 6.6), it automounts on the exporting machine, but the
other server, not so much.
ls /mountpoint/directory eventually times out (directory being the NFS
mount). mount -t nfs server:/location/being/exported /mnt works... but an
immediate ls /mnt gives me stale file handle.
The twist on this: the
2014 Aug 01
2
Live blockcopy onto storage pool that is an NFS mount?
Hello,
I am running qemu-kvm 1.4.0 and libvirt 1.0.2 on Ubuntu 12.04. I have two NFS
mountpoints configured as two separate pools in virsh:
<pool type='dir'>
<name>nfs1</name>
<uuid>419d799c-2493-6ebc-6848-53b0919e7bad</uuid>
<capacity unit='bytes'>6836057014272</capacity>
<allocation unit='bytes'>0</allocation>
2007 Mar 14
3
I/O bottleneck Root cause identification w Dtrace ?? (controller or IO bus)
Dtrace and Performance Teams,
I have the following I/O-performance-specific questions (I'm already
savvy with lockstat and the pre-dtrace utilities for performance
analysis, but need details on pinpointing I/O bottlenecks at the
controller or I/O bus):
Q.A> Determining I/O saturation bottlenecks (beyond service
times and kernel contention)
I'm
2013 May 24
2
NFS mount permissions
Hi everyone,
I would like to move my mails from ocfs2 to an NFS share.
As the mountpoint and all folders and files belong to nobody/nogroup,
dovecot is only able to access the mails if I give full access to
"others". I don't like that.
How do you NFS-using guys solve this problem?
Regards
Patrick
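If this is an NFSv4 mount, everything showing up as nobody/nogroup usually points to an ID-mapping domain mismatch rather than a permission problem (an assumption about the setup; with NFSv3, numeric UIDs/GIDs pass through unmapped). The usual fix on Linux is to align the idmapd domain on both ends:

```
# /etc/idmapd.conf -- set the same domain on client and server,
# then restart rpc.idmapd and remount ("example.com" is a placeholder)
[General]
Domain = example.com
```

Alternatively, mounting with -o vers=3 sidesteps ID mapping entirely.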
2015 Feb 27
1
Odd nfs mount problem [SOLVED]
m.roth at 5-cent.us wrote:
> m.roth at 5-cent.us wrote:
>> I'm exporting a directory, firewall's open on both machines (one CentOS
>> 6.6, the other RHEL 6.6), it automounts on the exporting machine, but
>> the
>> other server, not so much.
>>
>> ls /mountpoint/directory eventually times out (directory being the NFS
>> mount). mount -t nfs
2006 Sep 11
1
Looking for common dtrace scripts for NFS top talkers
We started seeing odd behaviour with clients somehow hammering our
ZFS-based NFS server. Nothing is obvious from mpstat/iostat/etc. I've
seen mention of NFSv3 client dtrace scripts before, and I was
wondering if there was ever one for the server end, displaying top
talkers, reads/writes, or the locations of such, to nail down abusive
clients short of using snoop/tcpdump to nail down via
2007 May 11
3
what files are being used from a NFS server perspective
I'm curious to know what files are being used from an NFS server
perspective. I'm unable to trace the client because it is in the boot
phase. Looking at the io provider, I believe it does not let you see
which files are being requested. That leads me to the fbt provider.
Does anyone know which fbt probe provides the file handle for an NFS
client request on the NFS
2011 Dec 23
0
dtrace-discuss Digest, Vol 80, Issue 6
Have you considered the mdb dcmd ::pfiles?
mfe@inker:~/Code/dtrace/examples$ pfexec mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp
scsi_vhci zfs ip hook neti sockfs arp usba uhci fctl s1394 stmf stmf_sbd md
lofs random idm sd crypto fcp cpc fcip smbsrv nfs ufs logindmux ptm nsmb
sppp nsctl sdbc rdc sv ii ipc ]
> ::ps ! grep clock-applet
R 1744
2011 Jan 05
0
dtrace-discuss Digest, Vol 69, Issue 2
Hello Srikant -
A quantization distributes the results of your aggregation into
power-of-two ranges. Presumably what you'd do in your script is
capture the inclusive elapsed time of each function call in your library,
then use this quantization to see how tightly banded the times are. Perhaps
there's some blocking I/O in some of your calls, for example, in which
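A minimal example of that pattern (hypothetical; libc is used here only as a stand-in for "your library"):

```d
/* Band per-function call latencies into power-of-two buckets. */
pid$target:libc::entry
{
        self->ts[probefunc] = timestamp;
}

pid$target:libc::return
/self->ts[probefunc]/
{
        @[probefunc] = quantize(timestamp - self->ts[probefunc]);
        self->ts[probefunc] = 0;
}
```

Long tails or a double-humped distribution in the quantize output are exactly the sort of blocking-I/O signature described above.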