I have an oi_148a test box with a pool on physical HDDs, a volume in this pool shared over iSCSI with explicit commands (sbdadm and such), and this iSCSI target is initiated by the same box. On the resulting iSCSI device I have another ZFS pool, "dcpool".

Recently I found the iSCSI part to be a potential bottleneck in my pool operations and wanted to revert to using the ZFS volume directly as the backing store for "dcpool". However, it seems that there may be some extra data beside the zfs pool in the actual volume (I'd at least expect an MBR or GPT, and maybe some iSCSI service data as an overhead). One way or another, "dcpool" cannot be found in the physical zfs volume:

==
# zdb -l /dev/zvol/rdsk/pool/dcpool
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
==

So the questions are:

1) Is it possible to skip iSCSI-over-loopback in this configuration? Preferably I would just specify a fixed offset (at which byte in the volume the "dcpool" data starts), remove the iSCSI/networking overheads, and see whether they are the bottleneck.

2) This configuration "zpool -> iSCSI -> zvol" was initially proposed as preferable over direct volume access by Darren Moffat as the fully supported way; see the last comments here:
http://blogs.oracle.com/darren/entry/compress_encrypt_checksum_deduplicate_with
I still wonder why: is the overhead deemed negligible, and are the quickly available extra options (such as mounting the iSCSI device from another server) the main gain? Now that I have hit the problem of reverting to direct volume access, that reasoning makes sense ;)

Thanks in advance for ideas or clarifications,
//Jim Klimov
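For reference, the iSCSI-over-loopback chain described above would have been built roughly like this (a sketch only: the zvol name pool/dcpool is taken from the zdb command above, the LU GUID is a placeholder, and exact COMSTAR/iscsiadm option syntax may differ between releases):

==
# create a SCSI logical unit backed by the zvol (prints the LU GUID)
sbdadm create-lu /dev/zvol/rdsk/pool/dcpool

# make the LU visible to initiators (GUID copied from the sbdadm output)
stmfadm add-view 600144f0xxxxxxxxxxxxxxxxxxxxxxxx

# create an iSCSI target in the STMF/COMSTAR framework
itadm create-target

# point the local initiator at this same box and rescan for the new disk
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 127.0.0.1
devfsadm -i iscsi
==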
On Tue, May 31, 2011 at 5:47 PM, Jim Klimov <jim at cos.ru> wrote:
> However it seems that there may be some extra data beside the zfs
> pool in the actual volume (I'd at least expect an MBR or GPT, and
> maybe some iSCSI service data as an overhead). One way or another,
> "dcpool" cannot be found in the physical zfs volume:
>
> ==
> # zdb -l /dev/zvol/rdsk/pool/dcpool
> --------------------------------------------
> LABEL 0
> --------------------------------------------
> failed to unpack label 0

The volume is exported as a whole disk. When given a whole disk, zpool creates a GPT partition table by default, so you need to pass the partition (not the disk) to zdb.

> So the questions are:
>
> 1) Is it possible to skip iSCSI-over-loopback in this configuration?

Yes. Well, maybe. On Linux you can use kpartx to make the partitions available; I don't know the equivalent command in Solaris.

--
Fajar
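For what it's worth, the kpartx approach on a Linux host looks roughly like this (a sketch; /dev/sdX is a placeholder for whatever name the iSCSI LUN or volume shows up under there, and the generated device-mapper names vary by device):

==
# create device-mapper entries for each partition found in the GPT
kpartx -av /dev/sdX

# the partitions then appear under /dev/mapper (e.g. sdX1 or sdXp1)
ls -l /dev/mapper/

# remove the mappings again when done
kpartx -d /dev/sdX
==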
> The volume is exported as a whole disk. When given a whole disk, zpool
> creates a GPT partition table by default, so you need to pass the partition
> (not the disk) to zdb.

Yes, that seems to be the problem. However, for the zfs volumes (/dev/zvol/rdsk/pool/dcpool) there seems to be no concept of partitions inside them; those are defined only for the iSCSI representation, which is exactly what I want to try and get rid of.

> On Linux you can use kpartx to make the partitions available; I don't
> know the equivalent command in Solaris.

Interesting... If only lofiadm could present not a whole file, but a given "window" into it ;)

At least, probing both the loopback-mounted device and the zfs volume directly with "fdisk", "parted" and the like reveals no noticeable iSCSI service-data overhead in the addressable volume space:

# parted /dev/zvol/rdsk/pool/dcpool print
_device_probe_geometry: DKIOCG_PHYGEOM: Inappropriate ioctl for device
Model: Generic Ide (ide)
Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      131kB   4295GB  4295GB  zfs
 9      4295GB  4295GB  8389kB

But lofiadm doesn't let me address that partition #1 as a separate device :(

Thanks,
//Jim Klimov
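One crude way to check whether the pool labels really do sit behind that GPT, without any partition device nodes, might be to copy the front of partition 1 into a plain file and point zdb at it (a sketch; the 131 kB start from the parted output above corresponds to sector 256, 1024 sectors cover the two 256 KB front labels, and only labels 0 and 1 would be meaningful in such a small copy):

==
# copy the first 512 KB of GPT partition 1 (starting at sector 256) into a file
dd if=/dev/zvol/rdsk/pool/dcpool of=/tmp/dcpool-front bs=512 skip=256 count=1024

# zdb can read labels from an ordinary file
zdb -l /tmp/dcpool-front
==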
> Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
> Sector size (logical/physical): 512B/512B

Just to check, did you already try:

  zpool import -d /dev/zvol/rdsk/pool/ poolname

?

thanks
Andy.
> > Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
> > Sector size (logical/physical): 512B/512B
>
> Just to check, did you already try:
>
>   zpool import -d /dev/zvol/rdsk/pool/ poolname

Thanks for the suggestion. As a matter of fact, I had not tried that. Unfortunately it hasn't helped (possibly due to the partitioning inside the volume):

# zpool import -d /dev/zvol/dsk/pool dcpool
cannot import 'dcpool': no such pool available
# zpool import -d /dev/zvol/rdsk/pool/ dcpool
cannot import 'dcpool': no such pool available

//Jim