similar to: Q: grow zpool build on top of iSCSI devices

Displaying 20 results from an estimated 3000 matches similar to: "Q: grow zpool build on top of iSCSI devices"

2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
Hi, I've been struggling for several weeks now to get stable ZFS replication using Solaris 10 11/06 (with current patches) and AVS 4.0. We tried it on VMware first and ended up in kernel panics en masse (yes, we read Jim Dunham's blog articles :-). Now we are trying on the real thing, two X4500 servers. Well, I have no trouble replicating our kernel panics there, too ... but I think I
2007 Jun 19
0
Re: [storage-discuss] Performance expectations of iscsi targets?
Paul, > While testing iscsi targets exported from thumpers via 10GbE and > imported via 10GbE on T2000s, I am not seeing the throughput I expect, > and more importantly there is a tremendous amount of read IO > happening on a purely sequential write workload. (Note all systems > have Sun 10GbE cards and are running Nevada b65.) The read IO activity you are seeing is a direct
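One commonly cited cause of read I/O during purely sequential writes to a zvol is a mismatch between the initiator's write size and the zvol's volblocksize, which forces ZFS into read-modify-write cycles. A hedged sketch of creating the backing zvol with a larger, matching block size (pool name, volume name, size and block size are all made-up examples):

    zfs create -V 100g -b 64k tank/iscsivol0   # volblocksize can only be set at creation time
    zfs set shareiscsi=on tank/iscsivol0       # export it as an iSCSI target (pre-COMSTAR syntax)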
2008 Sep 16
3
iscsi target problems on snv_97
I've recently upgraded my x4500 to Nevada build 97, and am having problems with the iscsi target. Background: this box is used to serve NFS underlying a VMware ESX environment (zfs filesystem-type datasets) and presents iSCSI targets (zfs zvol datasets) for a Windows host and to act as zoneroots for Solaris 10 hosts. For optimal random-read performance, I've configured a single
2008 Jan 31
3
I/O error: zpool metadata corrupted after powercut
In the last 2 weeks we have had 2 zpools corrupted. The pool was visible via zpool import, but could not be imported anymore; during the import attempt we got an I/O error. After the first power cut we lost our jumpstart/nfsroot zpool (another pool was still OK). Luckily the jumpstart data was backed up and easily restored; the nfsroot filesystems were not, but those were just test machines. We thought the metadata corruption
2009 Nov 20
1
Using local disk for cache on an iSCSI zvol...
I'm just wondering if anyone has tried this, and what the performance has been like. Scenario: I've got a bunch of v20z machines, with 2 disks. One has the OS on it, and the other is free. As these are disposable client machines, I'm not going to mirror the OS disk. I have a disk server with a striped mirror zpool, carved into a bunch of zvols, each exported via
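For context, adding a local disk as a read cache (L2ARC) to a pool built on an iSCSI zvol is a one-liner once the LUN is visible to the initiator. A rough sketch with made-up pool and device names, assuming the imported iSCSI LUN shows up as an ordinary cXtYdZ device:

    zpool create data c2t1d0           # pool on the imported iSCSI LUN
    zpool add data cache c1t1d0        # spare local disk as an L2ARC cache device

Note that cache devices only help reads; synchronous write latency over iSCSI is unaffected unless a separate log device is also added.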
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout: As you can see, he has mostly raidz vdevs but has one raidz2 vdev in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM > NAME STATE READ WRITE CKSUM > chipool1 ONLINE 0 0 0 >
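For illustration, mixing the two vdev types is something zpool only does reluctantly. A hedged example with invented device names; zpool warns about a mismatched replication level and requires -f:

    zpool create chipool1 raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0
    zpool add -f chipool1 raidz2 c4t0d0 c5t0d0 c6t0d0 c7t0d0   # -f overrides the mismatch warning

The main implication is that redundancy is per-vdev while data is striped across all vdevs, so losing any single vdev loses the pool; the single-parity raidz vdevs remain the weakest link regardless of the raidz2 vdev.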
2009 Jul 29
0
LVM and ZFS
I'm curious about whether there are any potential problems with using LVM metadevices as ZFS zpool targets. I have a couple of situations where using a device directly with ZFS causes errors on the console (about "Bus ...") and lots of "stalled" I/O. But as soon as I wrap that device inside an LVM metadevice and then use it in the ZFS zpool, things work perfectly fine and smoothly (no
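By "LVM" the poster presumably means Solaris Volume Manager (SVM) metadevices. A minimal sketch of the wrapping being described, with example device names, assuming state database replicas do not already exist:

    metadb -a -f -c 3 c0t0d0s7         # create SVM state database replicas (once per host)
    metainit d10 1 1 c1t2d0s0          # simple one-slice concat/stripe over the raw device
    zpool create tank /dev/md/dsk/d10  # hand the metadevice to ZFS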
2008 Jan 30
2
Convert MBOX
To all, I am using dovecot --version 1.0.10 and trying to convert mboxes to Maildirs, with the end goal of creating one folder filled with users' old mboxes so that when they log in for the first time their mail will be converted to Maildir format. I tried this and it did not work, giving me this output: <snip> default_mail_env = maildir:%h/mail/ #convert_mail =
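For reference, the usual way to do this in Dovecot 1.0.x is the convert plugin rather than default_mail_env alone. A hedged configuration sketch (paths are examples; check the convert plugin documentation for the exact option names available in 1.0.10):

    mail_location = maildir:%h/Maildir
    protocol imap {
      mail_plugins = convert
    }
    plugin {
      convert_mail = mbox:%h/mail
    }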
2007 Oct 05
2
zfs + iscsi target + vmware esx server
I'm posting here as this seems to be a zfs issue. We also have an open ticket with Sun support, and I've heard another large Sun customer is also reporting this as an issue. Basic problem: create a zfs file system and set shareiscsi to on. On a VMware ESX server, discover that iSCSI target. It shows up as 249 LUNs. When attempting to then add the storage, the ESX server eventually
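For anyone reproducing this, the target side is only two commands; the 249-LUN symptom appears on the ESX initiator side. A hedged sketch with invented names and size:

    zfs create -V 100g tank/esx-lun0
    zfs set shareiscsi=on tank/esx-lun0
    iscsitadm list target -v      # should show exactly one target/LUN for the zvol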
2008 Aug 04
3
DomU with ZFS root on iSCSI - any tips?
Hi Folks, Just wondering if anyone had any tips for trying to install an NV 94 DomU with ZFS root to an iSCSI target? The iSCSI target happens to be an NV 94 system with ZVOLs exported as the targets, but I wouldn't think that would matter. I tried this last week and the install seemed to complete fine, but when the DomU attempted to reboot after install, I received a message to the
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it was "constantly busy", and since our X4500 has always died miserably in the past when an HDD dies, they wanted to replace it before the HDD actually died. The usual was done: HDD replaced, resilvering started and ran for about 50 minutes. Then the system hung, same as always; all ZFS-related commands would just
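The usual replacement sequence on an X4500, sketched here with example slot and device names (the cfgadm attachment point differs per drive bay):

    zpool offline tank c1t3d0
    cfgadm -c unconfigure sata1/3     # then physically swap the drive
    cfgadm -c configure sata1/3
    zpool replace tank c1t3d0         # kicks off the resilver
    zpool status tank                 # watch resilver progress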
2012 Jun 10
0
A disk on Thumper giving random CKSUM error counts
Hello all, As some of you might remember, there is a Sun Fire X4500 (Thumper) server that I was asked to help upgrade to modern disks. It is still in a testing phase, and the one UltraStar 3TB currently available to the server's owners is humming in the server, with one small partition on its tail which replaced a dead 250GB disk earlier in the pool. The OS is still SXCE snv_117, so
2007 Oct 27
14
X4500 device disconnect problem persists
After applying 125205-07 on two X4500 machines running Sol10U4 and removing "set sata:sata_func_enable = 0x5" from /etc/system to re-enable NCQ, I am again observing drive disconnect error messages. This is in spite of the patch description, which claims multiple fixes in this area: 6587133 repeated DMA command timeouts and device resets on x4500 6538627 x4500 message logs contain multiple
2013 Jul 22
3
zpool on a zvol inside zpool
Hi. I'm moving some of my geli installation to a new machine. On the old machine it was running UFS. I use ZFS on the new machine, but I don't have an encrypted main pool (and I don't want one), so I'm considering a setup where I make a zpool on a zvol encrypted by geli. Would that be completely insane (should I use UFS instead?), or would it still be valid? Thanks. Eugene.
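For what it's worth, the layering being considered looks roughly like this on FreeBSD (pool and volume names are invented; a sketch, not a recommendation):

    zfs create -V 200g tank/cryptvol
    geli init -s 4096 /dev/zvol/tank/cryptvol     # prompts for the passphrase
    geli attach /dev/zvol/tank/cryptvol
    zpool create secure /dev/zvol/tank/cryptvol.eli

It works, but you pay for copy-on-write twice (the inner pool sits on the outer pool's zvol), and the .eli device must be attached before the inner pool can be imported at boot.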
2011 Jul 12
1
Can zpool permanent errors be fixed by scrub?
Hi, we had a server that lost its connection to a fibre-attached disk array where the data LUNs were housed, due to a 3510 power fault. After the connection was restored, a lot of the zpools had these permanent errors listed in zpool status, as per below. I checked the files in question and as far as I could see they were present and OK. I ran a zpool scrub against the other zpools and they came back with no errors, and the list of
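The usual sequence for this situation, assuming the underlying LUNs are healthy again (pool name is an example):

    zpool status -v datapool     # lists the files with permanent errors
    zpool scrub datapool
    zpool clear datapool         # reset the error counters once the scrub is clean

In practice the permanent-error list may not empty until a scrub (sometimes two, because of how the error logs rotate) has completed without finding the affected blocks unreadable.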
2008 Jul 17
1
How do you grow a ZVOL?
I've looked for anything I can find on the topic, but there does not appear to be anything documented. Can a ZVOL be expanded? In particular, can a ZVOL shared via iscsi be expanded? Thanks, Charles
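Yes: volsize is just a property. A hedged example (dataset name and size are invented):

    zfs get volsize tank/iscsivol0
    zfs set volsize=200G tank/iscsivol0    # growing is safe; shrinking risks data loss

The iSCSI initiator then has to rescan the LUN, and whatever filesystem sits on the LUN must be grown separately.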
2006 Nov 01
56
ZFS/iSCSI target integration
Rick McNeal and I have been working on building support for sharing ZVOLs as iSCSI targets directly into ZFS. Below is the proposal I'll be submitting to PSARC. Comments and suggestions are welcome. Adam ---8<--- iSCSI/ZFS Integration A. Overview The goal of this project is to couple ZFS with the iSCSI target in Solaris specifically to make it as easy to create and export ZVOLs
2009 Jun 16
3
Adding zvols to a DomU
I'm trying to add extra zvols to a Solaris 10 DomU, snv_113 Dom0. I can use virsh attach-disk <name> <zvol> hdb --device phy to attach the zvol as c0d1. Replacing hdb with hdd gives me c1d1, but then that is it. Being able to attach several more zvols would be nice, but even being able to get at c1d0 would be useful. Am I missing something, or can I only attach to hda/hdb/hdd?
2006 Oct 31
3
zfs: zvols minor #'s changing and causing probs w/ volumes
Team, **Please respond to me and my coworker listed in the Cc, since neither of us is on this alias** QUICK PROBLEM DESCRIPTION: The customer created a dataset which contains all the zvols for a particular zone. The zone is then given access to all the zvols in the dataset using a match statement in the zone config (see the long problem description for details). After the initial boot of the zone
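For reference, the match statement being described typically looks like this (zone, pool and dataset names are made up):

    zonecfg -z zone1
    zonecfg:zone1> add device
    zonecfg:zone1:device> set match=/dev/zvol/rdsk/tank/zone1/*
    zonecfg:zone1:device> end
    zonecfg:zone1> commit

The device nodes matching that pattern are created inside the zone at boot, which is presumably why zvol minor numbers changing afterwards causes trouble for the zone's volumes.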
2006 Sep 06
2
creating zvols in a non-global zone (or 'Doctor, it hurts when I do this')
A colleague just asked if zfs delegation worked with zvols too. Thought I'd give it a go and got myself in a mess (tank/linkfixer is the delegated dataset): root@non-global / # zfs create -V 500M tank/linkfixer/foo cannot create device links for 'tank/linkfixer/foo': permission denied cannot create 'tank/linkfixer/foo': permission denied Ok, so