similar to: Sharing with zfs

Displaying 20 results from an estimated 100 matches similar to: "Sharing with zfs"

2010 Nov 30
0
Resizing ZFS block devices and sbdadm
sbdadm can be used with a regular ZFS file or a ZFS block device. Is there an advantage to using a ZFS block device and exporting it to COMSTAR via sbdadm as opposed to using a file and exporting it? (e.g. performance or manageability?) Also, let's say you have a 5G block device called pool/test. You can resize it by doing: zfs set volsize=10G pool/test However if the device was already
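For reference, a minimal sketch of the zvol-backed variant being discussed (pool/test is from the post; the GUID placeholder and the modify-lu step are assumptions, since exact sbdadm behaviour may vary by build):

  # create a 5 GB zvol and register it as a COMSTAR logical unit
  zfs create -V 5g pool/test
  sbdadm create-lu /dev/zvol/rdsk/pool/test   # prints the GUID of the new LU

  # grow the backing zvol, then tell the SBD layer about the new size
  zfs set volsize=10G pool/test
  sbdadm modify-lu -s 10g <GUID-from-create-lu>   # placeholder GUID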
2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi, I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving. I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'. During zpool import I am getting a non-zero exit code,
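If the messages come from the legacy iscsitgt share property still being set on the zvols (an assumption on my part; the post does not show the exact error), a rough sketch of checking and clearing it before re-exporting through COMSTAR:

  # see whether the old iscsitgt-style sharing is still enabled
  zfs get shareiscsi vol01/zvol01 vol01/zvol02
  # disable it so zpool import no longer tries to talk to iscsitgtd
  zfs set shareiscsi=off vol01/zvol01
  zfs set shareiscsi=off vol01/zvol02
  # re-export the zvols via COMSTAR instead
  sbdadm create-lu /dev/zvol/rdsk/vol01/zvol01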
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release. Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134? These dedup bugs are my main frustration - if a staff member does a rm * in a directory with dedup you can take down the whole storage server - all with
2012 Sep 28
2
iscsi confusion
I am confused because I would have expected a 1-to-1 mapping: if you create an iSCSI target on some system, you would have to specify which LUN it connects to. But that is not the case... I read the man pages for sbdadm, stmfadm, itadm, and iscsiadm. I read some online examples where you first "sbdadm create-lu", which gives you a GUID for a specific device in the system, and then
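As far as I understand the COMSTAR model, targets and LUs are deliberately decoupled: a target does not name a LUN, and a LU is made visible to targets through a view. A minimal sketch (pool/vol1 and the GUID placeholder are illustrative, not from the post):

  # register a zvol as a logical unit; this prints its GUID
  sbdadm create-lu /dev/zvol/rdsk/pool/vol1
  # make the LU visible (by default to all host groups and target groups)
  stmfadm add-view <GUID-from-create-lu>
  # create an iSCSI target with an auto-generated IQN
  itadm create-target
  # initiators that log in to the target then see every LU its views allow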
2011 May 10
5
Modify stmf_sbd_lu properties
Is it possible to modify the GUID associated with a ZFS volume imported into STMF? To clarify: I have a ZFS volume that I have imported into STMF and export via iSCSI. I have a number of snapshots of this volume. I need to temporarily go back to an older snapshot without removing all the more recent ones. I can delete the current sbd LU, clone the snapshot I want to test, and then bring that back in
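A rough sketch of the clone-and-reimport idea, under the assumption that stmfadm on this build accepts a guid property at create time (snapshot, clone and GUID names are placeholders):

  # note the existing GUID, then drop the current LU
  stmfadm list-lu -v
  sbdadm delete-lu <GUID-of-current-LU>
  # clone the older snapshot to a temporary zvol
  zfs clone pool/vol@older-snap pool/vol_test
  # recreate the LU over the clone, reusing the original GUID
  stmfadm create-lu -p guid=<GUID-of-current-LU> /dev/zvol/rdsk/pool/vol_test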
2010 Feb 17
2
Howto convert Solaris 10 DomU from hvm to paravirt?
Is it possible to convert a Solaris 10 DomU from HVM to PV? Dom0 is OpenSolaris b132. Is there any documentation about this? Here is the config of the Solaris DomU: <domain type='xen' id='1'> <name>Solaris-10</name> <uuid>c505f6c3-b37b-8268-fe46-f10b3277238b</uuid> <memory>2097152</memory>
2010 Jan 20
4
OSOL Bug 13743
Does anyone know if this is something that will be looked at before b134 is released? Bug 13743 - virsh and xm is unable to start domain first time after boot http://defect.opensolaris.org/bz/show_bug.cgi?id=13743 Regards Henrik http://sparcv9.blogspot.com
2010 Jul 14
5
packets from DomU on VLAN disappear
Hi, I have a problem similar to the previous poster (thread: Virtual network of DomU's), but apparently not with similar solutions... Situation: - two physical links aggregated to "default0" - dladm create-vlan on default0 with several different VLANs - xVM DomU running Linux connected to the same VLANs. When trying to ping the DomU from Dom0, I can see with snoop (in Dom0)
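One thing sometimes suggested for this kind of setup (an assumption, not something stated in the thread) is to give the DomU a VNIC carrying the VLAN tag rather than the VLAN link itself, roughly:

  # VNIC on the aggregation with the VLAN tag applied at the VNIC layer
  dladm create-vnic -l default0 -v 200 vnic200   # 200 is an example VLAN id
  dladm show-vnic
  # then point the DomU's vif at vnic200 in its xVM configuration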
2009 Aug 28
0
Comstar and ESXi
Hello all, I have an OpenSolaris server running 2009.06. I installed COMSTAR and enabled it. I have an ESXi 4.0 server connecting to COMSTAR via iSCSI on its own switch. (There are two ESXi servers, both of which do this regardless of whether the other is on or off.) The error I see on ESXi is "Lost connectivity to storage device naa.600144f030bc450000004a9806980003. Path vmhba33:C0:T0:L0 is
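Not an answer, but a few checks on the COMSTAR side that are usually worth capturing when ESXi reports lost paths (standard stmfadm/svcs usage; the GUID is the one from the error message):

  svcs stmf                       # is the STMF framework online?
  stmfadm list-lu -v              # is the LU still registered and online?
  stmfadm list-view -l 600144f030bc450000004a9806980003
  stmfadm list-target -v          # are sessions from the ESXi initiators present?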
2010 Dec 17
2
Supermicro AOC-SAT2-MV8 and 1TB Seagate Barracuda ES.2
Hi all, I'm getting a very strange problem with a recent OpenSolaris b134 install. System is: Supermicro X5DP8-G2 BIOS 1.6a 2x Supermicro AOC-SAT2-MV8 1.0b 11 Seagate Barracuda 1TB ES.2 ST31000340NS drives If I have any of the 11 1TB Seagate drives plugged into the controller, the AOC-SAT2-MV8 BIOS appears to detect them just fine, but I get the following problems: 1. Grub takes a
2010 Oct 23
2
No ACL inheritance with aclmode=passthrough in onnv-134
Hi list, while preparing for the changed ACL/mode_t mapping semantics coming with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are not inherited when aclmode is set to passthrough for the filesystem. This very much puzzles me. Example: $ uname -a SunOS os 5.11 snv_134 i86pc i386 i86pc $ pwd /Volumes/ACLs/dir1 $ zfs list | grep /Volumes rpool/Volumes 7,00G 39,7G 6,84G
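For what it's worth, on these builds inheritance is governed by the aclinherit property rather than aclmode (aclmode only controls how chmod(2) interacts with an existing ACL), so a sketch of what I would check (testuser and the permission set are made-up examples; the dataset name is taken from the zfs list excerpt):

  zfs get aclmode,aclinherit rpool/Volumes
  zfs set aclinherit=passthrough rpool/Volumes
  # add an ACE that is explicitly marked inheritable, then create a child
  chmod A+user:testuser:read_data/write_data:file_inherit/dir_inherit:allow dir1
  mkdir dir1/child && ls -dv dir1/child   # the inherited ACEs should show up here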
2010 Mar 27
14
b134 - Mirrored rpool won't boot unless both mirrors are present
I have two 500 GB drives on my system that are attached to built-in SATA ports on my Asus M4A785-M motherboard, running in AHCI mode. If I shut down the system, remove either drive, and then try to boot the system, it will fail to boot. If I disable the splash screen, I find that it will display the SunOS banner and the hostname, but it never gets as far as the "Reading ZFS config:"
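One common cause worth ruling out (an assumption on my part, not a diagnosis from the thread) is that only one half of the mirror ever had a boot loader installed; on x86 each disk needs its own GRUB stage1/stage2:

  # install GRUB on both halves of the mirrored rpool (device names are examples)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
  zpool status rpool   # confirm both sides of the mirror are healthy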
2010 Aug 30
5
pool died during scrub
I have a bunch of sol10U8 boxes with ZFS pools, mostly raidz2 8-disk stripes. They're all Supermicro-based with retail LSI cards. I've noticed a tendency for things to go a little bonkers during the weekly scrub (they all scrub over the weekend), and that's when I'll lose a disk here and there. OK, fine, that's sort of the point, and they're
2010 Aug 21
1
Upgrade Nevada Kernel
Hi, I hit a ZFS bug that should be resolved in snv_134 or later. I'm running snv_111. How do I upgrade to the latest version? Thanks
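Assuming this is the OpenSolaris (IPS) distribution rather than SXCE, the usual path to a newer build is switching to the dev publisher and running an image update, roughly:

  # point the image at the development repository and update the whole BE
  pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
  pkg image-update -v
  # a new boot environment is created; reboot into it when the update finishes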
2011 May 17
3
Reboots when importing old rpool
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk and tried to import it with: # zpool import -f <long id number> Old_rpool but the computer reboots. Why is that? On my old hard disk, I have 10-20 BEs, starting with OpenSolaris 2009.06 and upgraded through b134 up to snv_151a. I also have a WinXP entry in GRUB. This hard disk is partitioned, with a
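To narrow down whether mounting the old datasets is what triggers the reboot, one experiment worth trying is importing the pool without mounting anything and under an alternate root (the pool id stays whatever zpool import reports):

  # import without mounting any datasets (-N) and with an alternate root (-R)
  zpool import -f -N -R /a <long id number> Old_rpool
  zfs list -r Old_rpool   # inspect the datasets before mounting anything by hand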
2010 Aug 03
2
When is the L2ARC refreshed if on a separate drive?
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the pro version) as an L2ARC to the single mirrored pair. I'm running b134, with ZFS pool version 22, with dedup enabled. If I understand correctly, the dedup table should be in
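To watch how (and when) the cache device actually fills, the standard counters should be enough; a small sketch (the pool name 'tank' is an assumption):

  zpool iostat -v tank 5            # the cache device's alloc column shows L2ARC fill
  kstat -p zfs:0:arcstats:l2_size   # bytes currently cached in the L2ARC
  kstat -p zfs:0:arcstats:l2_hits   # climbs once reads are being served from it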
2010 Sep 25
4
dedup testing?
Hi all, Has anyone done any testing with dedup on OI? On OpenSolaris there is a nifty "feature" that allows the system to hang for hours or days when attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering... I'll get a 10TB test box released for testing OI in a few weeks, but before
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
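A rough sketch of the split being proposed, assuming the SSD gets an SMI label with two slices (device names and sizes are illustrative): one slice for the root pool, the remainder as cache for the data pool.

  # after carving c8t1d0 into s0 (~20 GB) and s1 (the rest) with format(1M):
  # the installer (or zpool create) puts the root pool on the first slice, e.g.
  zpool create rpool c8t1d0s0
  # and the leftover slice becomes L2ARC for the data pool
  zpool add tank cache c8t1d0s1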
2010 Apr 21
2
HELP! zpool corrupted data
Hello, After a power outage our file server running FreeBSD 8.0p2 will no longer come up due to zpool corruption. I get the following output when trying to import the ZFS pool using either a FreeBSD 8.0p2 CD or the latest OpenSolaris snv_143 CD: FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC amd64 mfsbsd# zpool import pool: tank id: 1998957762692994918 state: FAULTED
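From the snv_143 live CD (whose zpool supports the recovery option added around build 128), one next step is a dry-run recovery import before attempting the real thing:

  # -F tries to roll back to the last consistent txg; -n only reports what it would do
  zpool import -fFn tank
  # if the dry run looks sane, attempt the actual recovery import
  zpool import -fF tank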
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers: a FreeBSD box with a zpool at v28 and a Nexenta (OpenSolaris b134) box running zpool v26. Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type the command cd /remotepool/us (for /remotepool/users) and autocomplete with the Tab key, I get a panic. Check the panic @
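For context, a minimal sketch of the kind of send/receive pipeline being described (hostnames, pool and snapshot names are placeholders); receiving with -u keeps the datasets unmounted on the FreeBSD side, which may help while debugging the panic:

  # on the Nexenta box: recursive snapshot, then replicate the whole tree
  zfs snapshot -r tank/users@repl-1
  zfs send -R tank/users@repl-1 | ssh freebsd-host zfs receive -Fdu remotepool
  # -F rolls the target back to the matching snapshot, -d preserves the dataset path,
  # -u leaves the received filesystems unmounted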