similar to: bug: zpool create allow member driver as the raw drive of full partition

Displaying 20 results from an estimated 200 matches similar to: "bug: zpool create allow member driver as the raw drive of full partition"

2005 Oct 26
1
Error message with fbt::copen:entry probe
All, The attached script is causing the following error message ... bash-3.00# ./zmon_bug.d dtrace: error on enabled probe ID 2 (ID 4394: fbt:genunix:copen:entry): invalid address (0xfd91747f) in predicate at DIF offset 120 dtrace: error on enabled probe ID 2 (ID 4394: fbt:genunix:copen:entry): invalid address (0xfef81a3f) in predicate at DIF offset 120 Any ideas? thxs Joe --
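
A hedged aside: "invalid address ... in predicate" usually means the predicate dereferences a user-space pointer that is not mapped at probe time; comparing the pointer as an integer is safe, dereferencing it is not. A minimal sketch (the poster's zmon_bug.d is not shown; assuming the filename argument is arg1, as in copen(startfd, fname, filemode, createmode)):

    # guard the pointer in the predicate, dereference only in the body
    dtrace -n 'fbt:genunix:copen:entry /arg1 != 0/ { @opens[copyinstr(arg1)] = count(); }'
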
2010 Jul 09
4
resilver of older root pool disk
This is a hypothetical question that could actually happen: Suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0, and for some reason c0t0d0s0 goes offline but comes back online after a shutdown. The primary boot disk would then be c0t0d0s0, which would have much older data than c0t1d0s0. Under normal circumstances ZFS would know that c0t0d0s0 needs to be resilvered. But in this case
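
For what it's worth, ZFS records a transaction-group (TXG) number in every device label, so at boot it trusts the side with the newest uberblock and resilvers the stale leg rather than the other way around. A quick sanity check, as a sketch using the names from the post:

    zpool status -v rpool     # should show c0t0d0s0 resilvering, not serving stale data
    zpool scrub rpool         # verify both halves once the resilver completes
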
2004 Dec 20
1
panic with search
Hello, My imap daemon gets SIGABRT with the following message: "pool_data_stack_realloc(): stack frame changed" This happens with CVS HEAD sources (with or without my last 2 patches) and is triggered while running a SEARCH command. This is the IMAP command log: --------------------------------------------------------------------------- * PREAUTH [CAPABILITY IMAP4rev1 SORT THREAD=REFERENCES
2008 Jul 17
2
zfs sparc boot "Bad magic number in disk label"
Hello, I recently installed SunOS 5.11 snv_91 onto a Ultra 60 UPA/PCI with OpenBoot 3.31 and two 300GB SCSI disks. The root file system is UFS on c0t0d0s0. Following the steps in ZFS Admin I have attempted to convert root to ZFS utilizing c0t1d0s0. However, upon "init 6" I am always presented with: Bad magic number in disk label can't open disk label package My Steps: 1)
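
A hedged note on the symptom: "Bad magic number in disk label" from OBP usually means the target slice lacks a bootable SMI (VTOC) label and/or the ZFS bootblock; SPARC root pools cannot boot from EFI-labeled disks. A sketch of the usual checks against the poster's c0t1d0s0:

    prtvtoc /dev/rdsk/c0t1d0s0     # confirm an SMI label; relabel with 'format -e' if it shows EFI
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
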
2015 Jan 21
2
Shared folders - Namespace definition
Hello, I'm trying to configure shared mailboxes with ACL. My problem is the FS layout. Our maildirs are completely outside the home dirs (home dirs are on a pure SSD zpool, maildirs on a separate HDD zpool). We are using checkpassword auth, which sets mailbox_location for each user. The layout is as follows: maildirs: /dpool/mail/maldirs/user-uuid/ home is: /dpool/mail/home/user-uuid/ index &
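
One hedged way to express that layout as a dovecot 2.x shared namespace, assuming the login name maps directly to the user-uuid directory (in a shared namespace %%u expands to the *other* user):

    namespace {
      type = shared
      separator = /
      prefix = shared/%%u/
      location = maildir:/dpool/mail/maldirs/%%u:INDEX=/dpool/mail/home/%%u/index:CONTROL=/dpool/mail/home/%%u/control
      subscriptions = no
      list = children
    }
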
2015 Jan 21
0
Shared folders - Namespace definition
On Wed, 21 Jan 2015, Peter Hodur wrote: > > maildirs: > > /dpool/mail/maldirs/user-uuid/ > > > home is: > > /dpool/mail/home/user-uuid/ > > > index & control is under home: > > /dpool/mail/home/user-uuid/[index|control] > > > the problem is how to specify path in NAMESPACE definition. I can use
2002 Feb 27
0
RE: ANY HOPE GETTING RESPONSE TO QUESTIONS? - samba3.0alpha15 - solaris 8
Just replying to let you know I'm listening =)... Sorry Dave, I'm running 2.2.3a on my Solaris 8 machine and even that still has some issues with winbind. By the way, while I have the attention of you Solaris Gurus... I was trying to make my Ultra 1 boot from an external disk on the same SCSI channel as the default boot disk. I changed my /etc/vfstab for the new disk to have the right
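
On the boot-disk side question: /etc/vfstab alone is not enough; the new disk also needs a boot block, and OBP has to be pointed at it. A sketch (device and alias names are hypothetical):

    installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t2d0s0
    eeprom boot-device="disk1:a disk:a"     # or set boot-device from the ok prompt
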
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well, and the choice of which disk got which name was perfect! But there seems to be an odd anomaly (at least with b132). Started with c0t1d0s0 running b132 (root pool is called rpool); attached c0t0d0s0 and waited for it to resilver; rebooted from c0t0d0s0; ran 'zpool split rpool spool'; rebooted from c0t0d0s0 again, and both rpool and spool were mounted
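
For reference, the documented flow (a sketch with the poster's names); zpool split is supposed to leave the new pool exported, so finding spool mounted after a reboot does look like a bug:

    zpool attach rpool c0t1d0s0 c0t0d0s0   # mirror and wait for the resilver
    zpool split rpool spool                # spool takes the last-attached leg, left exported
    zpool import -R /mnt spool             # import explicitly; the altroot avoids mountpoint clashes
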
2005 Sep 09
1
1.0alpha1: stack frame core
Hi, Today's core dump from 1.0alpha1 came from a syslog message of: IMAP(user): pool_data_stack_realloc(): stack frame changed gdb info on the resulting core dump attached. Question: how many people are building/using dovecot 1.0alpha1 with gcc 4.0.1 versus gcc 3.4.x? I am wondering if these issues come from the compiler instead of dovecot itself? Jeff Earickson Colby College
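
For comparing crashes across compilers, a full backtrace from the core says more than the syslog line alone; a sketch (the binary path is an assumption and varies by install prefix):

    gdb /usr/local/libexec/dovecot/imap core
    (gdb) bt full
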
2006 Jul 19
1
Q: T2000: raidctl vs. zpool status
Hi all, IHACWHAC (I have a colleague who has a customer - hello, if you're listening :-) who's trying to build and test a scenario where he can salvage the data off the (internal ?) disks of a T2000 in case the sysboard and with it the on-board raid controller dies. If I understood correctly, he replaces the motherboard, does some magic to get the raid config back, but even
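
The two views can at least be cross-checked from the running system; a sketch (the volume name c0t0d0 is assumed):

    raidctl -l c0t0d0     # the hardware RAID volume as the on-board LSI controller sees it
    zpool status          # the ZFS pool built on top of that volume
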
2010 Feb 27
1
slow zfs scrub?
hi all I have a server running snv_131 and the scrub is very slow. I have a cron job that starts it every week; it's been running for a while now, and it's very, very slow: scrub: scrub in progress for 40h41m, 12.56% done, 283h14m to go The configuration is listed below, consisting of three raidz2 groups with seven 2TB drives each. The root fs is on a pair of X25-M (gen 1)
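
Scrubs on builds of this era are deliberately throttled when the pool sees other I/O. A commonly suggested, unsupported knob is the scrub delay, settable live with mdb; a sketch, at your own risk:

    echo zfs_scrub_delay/W0t0 | mdb -kw     # drop the inter-I/O delay for scrub
    zpool status                            # re-check the estimated completion time
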
2008 Oct 11
5
questions about replacing a raidz2 vdev disk with a larger one
I'd like to replace/upgrade two 500GB disks in a RaidZ2 vdev with 1TB disks, but I have some preliminary questions/concerns before trying 'zpool replace dpool ?' Will ZFS permit this replacement? Will ZFS use the extra space in a heterogeneous RaidZ2 vdev, or is the size limited by the smallest disk in the vdev? Thanks in advance, Vizzini The system is currently running
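
The replacement itself is a zpool (not zfs) operation, and a raidz2 vdev's usable size stays pinned to its smallest member until every disk is upgraded. A sketch with hypothetical device names:

    zpool replace dpool c1t2d0 c1t3d0     # resilver a 1TB disk in place of a 500GB one
    zpool set autoexpand=on dpool         # newer builds; older builds grow after an export/import
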
2010 Apr 20
1
libguestfs mounting solaris 10 ZFS guest
Not sure if this possible, but I have a KVM guest running Solaris 10 with the OS on ZFS and I am trying to use libguestfs/guestfish/guestmount to get to the VM. I am running Red Hat EL 5.4 with EPEL rpms as required. The VM is on a LV and it boots fine, but I can't seem to get the syntax correct to get libguestfs to deal with it. Guestmount seemed like the best option because it supports FUSE
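
A sketch of probing the guest (the LV path is hypothetical, and list-filesystems needs a reasonably recent libguestfs); note the appliance is Linux-based, so actually mounting Solaris ZFS depends on ZFS support inside it - it may only enumerate the devices:

    guestfish --ro -a /dev/VolGroup00/solaris10
    ><fs> run
    ><fs> list-filesystems
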
2010 May 07
0
confused about zpool import -f and export
Hi, all, I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen, and I have to run the installer under hvm mode, then I'm trying to get it back up under pv mode. In that process the controller names change, and that's where I'm getting tripped up. I do a successful install, then I boot OK,
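
The concept in one line: export writes a clean "not in use" marker so the next import needs no -f, and import rescans the devices, so renamed controllers are harmless. A sketch (the pool name is hypothetical):

    zpool export syspool
    zpool import              # scan /dev/dsk and list importable pools under their new names
    zpool import syspool      # -f is only for pools that were never cleanly exported (hostid mismatch)
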
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread... so I'm reposting it) I am trying to move my data off of a 40GB 3.5" drive to a 40GB 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work: I was not able to boot from my second drive (c0t2d0). I cannot remember
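
A hedged guess at the culprit: a bootable root-pool mirror must be attached as a slice carrying an SMI label; attaching the whole disk (c0t2d0) gives it an EFI label that OBP cannot boot. A sketch with the poster's devices:

    zpool attach -f rpool c0t0d0s0 c0t2d0s0   # attach the slice, not the whole disk
    # then re-run installboot against /dev/rdsk/c0t2d0s0 and test with: ok boot disk2 (alias is machine-specific)
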
2011 Jan 04
0
zpool import hangs system
Hello, I've been using Nexentastore Community Edition with no issues for a while now. However, last week I was going to rebuild a different system, so I started to copy all the data off it to a raidz2 volume on my CE system. This was going fine until I noticed that the copy had stalled and the entire system was non-responsive. I let it sit for several hours with no
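
When an import wedges the whole box, a dry-run recovery import from a rescue environment at least shows whether a rewind could help; a sketch (pool name hypothetical, flags per b128+ zpool):

    zpool import -nF tank     # report what rewinding to the last good TXG would discard, without doing it
    zpool import -F tank      # perform the rewind if the dry run looks acceptable
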
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server and the space used is much more than what I'm uploading: Documents = 147MB Videos = 11G Software = 1.4G By my calculations, that equals 12.547G, yet zpool list is showing 21G as being allocated: NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT dpool 27.2T 21.2G 27.2T 0% 1.00x ONLINE - It doesn't look like
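
To see where the extra allocation sits (raidz parity counted by zpool list, snapshots, reservations, child datasets), a sketch:

    zfs list -o space -r dpool    # splits usage into snapshots, refreservation, children, etc.
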
2011 Aug 05
0
Kernel panic on zpool import. 200G of data inaccessible! assertion failed: zvol_get_stats(os, nv) == 0
System: snv_151a 64-bit on Intel. Error: panic[cpu0] assertion failed: zvol_get_stats(os, nv) == 0, file: ../../common/fs/zfs/zfs_ioctl.c, line: 1815 Failure first seen on Solaris 10, update 8 History: I recently received two 320G drives and realized from reading this list that it would have been better if I had done the install on the small drives, but I didn't have them at the time.
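
An often-cited (and unsupported) workaround for assertion panics during import is to relax assertions and enable ZFS recovery mode before retrying; a sketch, at your own risk:

    # /etc/system, then reboot and retry the import
    set zfs:zfs_recover=1
    set aok=1
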
2010 Jan 13
3
Recovering a broken mirror
We have a production SunFire V240 that had a ZFS mirror until this week. One of the drives (c1t3d0) in the mirror failed. The system was shut down and the bad disk replaced without an export. I don't know what happened next, but by the time I got involved there was no evidence that the remaining good disk (c1t2d0) had ever been part of a ZFS mirror. Using dd on the raw device I can see data
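
Before deeper surgery it is worth checking whether any of the four ZFS labels survive on the good disk; a sketch:

    zdb -l /dev/rdsk/c1t2d0s0     # dumps labels L0-L3 if any are intact
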
2011 Dec 21
8
Any rhyme or reason to disk dev names?
Hello, I am curious to know if there is an easy way to guess or identify the device names of disks. Previously the /dev/dsk/c0t0d0s0 system made sense to me... I had a SATA controller card with 8 ports, and they showed up with the numbers 1-8 in the "t" position of the device name. But I just built a new system with two LSI SAS HBAs in it, and my device names are along the lines of:
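
With SAS HBAs the "t" component becomes the target's World Wide Name rather than a small controller-assigned number, which is why the names look so different. To map names back to physical drives, a sketch:

    echo | format     # list every disk with its cXtWWNdN name
    iostat -En        # vendor, model and serial number for each device
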