similar to: EOF broken on zvol raw devices?

Displaying 11 results from an estimated 11 matches similar to: "EOF broken on zvol raw devices?"

2008 Dec 07
2
zvol_read() and zvol_write().
I can't find anything using those functions. Can they be removed? -- Pawel Jakub Dawidek http://www.wheel.pl pjd at FreeBSD.org http://www.FreeBSD.org FreeBSD committer Am I Evil? Yes, I Am!
2009 Oct 17
3
zvol used apparently greater than volsize for sparse volume
What does it mean for the reported value of a zvol volsize to be less than the product of used and compressratio? For example:

# zfs get -p all home1/home1mm01
NAME             PROPERTY  VALUE        SOURCE
home1/home1mm01  type      volume       -
home1/home1mm01  creation  1254440045   -
home1/home1mm01  used      14902492672
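One way to chase this down, as a minimal sketch (the dataset name comes from the excerpt above; the property selection is an assumption about which values matter here):

    # pull just the figures being compared, in exact byte counts
    zfs get -p used,referenced,compressratio,volsize,refreservation home1/home1mm01
    # for a sparse zvol (no refreservation), used also counts metadata,
    # indirect blocks and any snapshots, so used * compressratio can
    # legitimately come out larger than the logical volsize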
2013 Nov 09
1
10.0 BETA 3 with redports kernel panic
The redbuild boxes for redports are doing a very good and reliable job getting kernel panics out of 10.0: http://people.freebsd.org/~sbruno/redbuild_panic.txt Pretty frequent and pretty nasty. Happening on multiple machines under load. sean
2010 Sep 09
3
Volsize for DomU
Hey all, I've created a Xen DomU on snv_134; it's Debian Lenny. For the disk, I've used a ZFS volume, which I accidentally set to 1GB. I've tried setting the volsize of the volume to 3GB and rebooting the domain, but it still sees only the initial 1GB disk. I've read about rebooting for volsize to take effect, but this seems to be in the context of either
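For what it's worth, the dom0 side of the resize is just a property change; a minimal sketch, using a hypothetical dataset name rpool/lenny-disk since the excerpt does not name the volume:

    # grow the backing zvol and confirm the new logical size
    zfs set volsize=3G rpool/lenny-disk
    zfs get volsize rpool/lenny-disk
    # the DomU still has to pick up the new geometry, e.g. by detaching and
    # re-attaching the virtual block device or destroying and recreating the domain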
2008 Jul 28
0
'referenced' bigger than 'volsize'?
Hi, a

zfs create -V 1M pool/foo
dd if=/dev/random of=/dev/zvol/rdsk/pool/foo bs=1k count=1k

(using Nevada b94) yields

zfs get all pool/foo
pool/foo  used            1,09M  -
pool/foo  referenced      1,09M  -
pool/foo  volsize         1M     -
pool/foo  refreservation  1M     local

Why is 'referenced' bigger
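A quick way to see where the extra space goes, as a sketch assuming the same pool/foo as in the excerpt:

    zfs get used,referenced,volsize,refreservation,volblocksize pool/foo
    # 'referenced' counts the zvol's metadata and indirect blocks on top of
    # the 1M of data blocks, which is one plausible reason it ends up above volsize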
2008 Oct 30
7
Is there any way to check if DTrace is running or a DTrace probe is enabled?
Hi, I am adding DTrace probes within the NFS v3 client. In my current implementation, I use some tsd_*() functions and the kmem_zalloc() function. These functions might be heavy and affect performance, so I want to call them only when DTrace is running or the DTrace probes are enabled. Is there a way to check whether DTrace is running or a DTrace probe is enabled? Regards, Danhua
2010 Nov 30
0
Resizing ZFS block devices and sbdadm
sbdadm can be used with a regular ZFS file or a ZFS block device. Is there an advantage to using a ZFS block device and exporting it to COMSTAR via sbdadm, as opposed to using a file and exporting it (e.g. performance or manageability)? Also, let's say you have a 5G block device called pool/test. You can resize it by doing: zfs set volsize=10G pool/test However if the device was already
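A sketch of the resize path for an already-exported zvol, assuming this build's sbdadm accepts a size via modify-lu -s (worth checking against the local man page; the GUID below is a placeholder):

    # grow the backing zvol first
    zfs set volsize=10G pool/test
    # then grow the registered logical unit to match
    sbdadm list-lu
    sbdadm modify-lu -s 10g 600144f0...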
2007 Feb 24
1
zfs received vol not appearing on iscsi target list
Just installed Nexenta and I've been playing around with zfs.

root at hzsilo:/tank# uname -a
SunOS hzsilo 5.11 NexentaOS_20070105 i86pc i386 i86pc Solaris
root at hzsilo:/tank# zfs list
NAME             USED   AVAIL  REFER  MOUNTPOINT
home             89.5K  219G   32K    /export/home
tank             330K   1.78T  51.9K  /tank
tank/iscsi_luns  147K
2009 Jun 08
4
[caiman-discuss] Can not delete swap on AI sparc
Hi Richard, Richard Robinson wrote:
> I should add that I also used truss and saw the same ENOMEM error. I am on a 4Gb system with swap -l reporting
>
> swapfile                  dev    swaplo  blocks   free
> /dev/zvol/dsk/rpool/swap  181,1       8  4194296  4194296
>
> and I was trying to follow the directions for increasing swap here:
>
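For reference, the usual sequence for growing a zvol-backed swap device on Solaris looks roughly like the sketch below; the 8G size is a placeholder, and whether the delete step succeeds depends on how much swap is currently in use:

    # remove the existing swap device, grow the zvol, then add it back
    swap -d /dev/zvol/dsk/rpool/swap
    zfs set volsize=8G rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap
    swap -l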
2006 Jan 02
16
DTrace provider for NFS
FYI, I posted a blog a few days ago about a DTrace provider for NFS that is currently in development: http://blogs.sun.com/roller/page/samf?entry=a_dtrace_provider_for_nfs Let's discuss any questions, comments, etc. here. I also advertised this on nfs-discuss at opensolaris.org. Naturally, I would expect the discussion here to be more on the specifics of DTrace, and the
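For a flavour of how such a provider ends up being used, a hypothetical one-liner; the probe names op-read-start and op-write-start are assumptions about the eventual naming, not taken from the blog post:

    # count NFS v3 read/write operations by probe name
    dtrace -n 'nfsv3:::op-read-start, nfsv3:::op-write-start { @[probename] = count(); }'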
2010 Oct 11
0
Ubuntu iSCSI install to COMSTAR zfs volume Howto
I apologize if this has been covered before. I have not seen a blow-by-blow installation guide for Ubuntu onto an iSCSI target. The install guides I have seen assume that you can make a target visible to all, which is a problem if you want multiple iSCSI installations on the same COMSTAR target. During install Ubuntu generates three random initiators and you have to deal with them to get things
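The usual COMSTAR answer to per-initiator visibility is a host group per installation, roughly as sketched below; the group name, initiator IQN, and LU GUID are all placeholders:

    # create a host group for this install and add the initiator(s) Ubuntu presents
    stmfadm create-hg ubuntu-install
    stmfadm add-hg-member -g ubuntu-install iqn.1993-08.org.debian:01:abcdef
    # expose the LU only to that host group, then confirm the view
    stmfadm add-view -h ubuntu-install 600144f0...
    stmfadm list-view -l 600144f0...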