Displaying 20 results from an estimated 6000 matches similar to: "Undo/reverse zpool create"
2007 Jun 13
5
drive displayed multiple times
So I just imported an old zpool onto this new system. The problem is that one drive (c4d0) is showing up twice. First it's displayed as ONLINE, then it's displayed as "UNAVAIL". This is obviously causing a problem, as the zpool now thinks it's in a degraded state, even though all drives are there and all are online.
This pool should have 7 drives total,
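A hedged sketch of a common first step in this situation: exporting and re-importing the pool forces ZFS to re-scan its device paths, which sometimes clears a stale duplicate entry. The pool name 'tank' is an assumption, not taken from the post.
  zpool export tank
  zpool import tank
  zpool status tank    # verify c4d0 now appears only once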
2007 Feb 06
4
The ZFS MOS and how DNODES are stored
ZFS documentation lists snapshot limits on any single file system in a pool at 2**48 snaps, and that seems to logically imply that a snap on a file system does not require an update to the pool's currently active uberblock. That is to say, if we take a snapshot of a file system in a pool and then make any changes to that file system, the copy on write behavior induced by the changes will
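A concrete version of the scenario being described, as a minimal sketch (pool and file system names are hypothetical):
  zfs snapshot tank/fs@before          # snapshot of one file system in the pool
  cp /etc/motd /tank/fs/               # later changes to tank/fs are copy-on-write
  zfs list -t snapshot -r tank/fs      # the snapshot still references the old blocks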
2010 Aug 12
6
one ZIL SLOG per zpool?
I have three zpools on a server and want to add a mirrored pair of SSDs for the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools, or is it one ZIL SLOG device per zpool?
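A log vdev belongs to exactly one pool, so sharing one pair of SSDs across three pools would mean slicing the SSDs and giving each pool its own mirrored pair of slices. A sketch under that assumption, with hypothetical pool, device, and slice names:
  zpool add pool1 log mirror c2t0d0s0 c2t1d0s0
  zpool add pool2 log mirror c2t0d0s1 c2t1d0s1
  zpool add pool3 log mirror c2t0d0s2 c2t1d0s2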
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
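On builds that support it, the recovery import mode rolls the pool back to an earlier uberblock rather than requiring it to be invalidated by hand; a hedged sketch, assuming the pool is named 'tank' and the backing file lives under /export (both assumptions):
  # -F attempts recovery by discarding the last few transactions (later builds only)
  # -d points the import at the directory holding the file-backed vdev
  zpool import -d /export -F -f tank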
2008 Apr 29
24
recovering data from a detached mirrored vdev
Hi,
my system (Solaris b77) was physically destroyed and I lost data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like TCT, http://www.porcupine.org/forensics/) I can use to recover at least some of the data.
thanks in advance for
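A detached vdev has had its label largely cleared, which is why a normal import fails; a first diagnostic step is to dump whatever label information remains. A minimal sketch, with a hypothetical device path:
  zdb -l /dev/dsk/c1t2d0s0    # dump the four ZFS labels from the detached device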
2010 Apr 06
15
Why we won't use zpool ever again
Hi everyone,
Just wanted to tell you a little story. We've been enthusiastic Puppet
users for about a year now here at the Geographic Institute of the
University of Zürich.
But we won't use the zpool type ever again. It's just not worth it.
Here's what happened:
. one of our servers lost knowledge about one of its zfs pools
. puppet didn't find the pool
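The defensive step implied by this story is to check whether the pool is merely exported or not being detected before any tooling tries to (re)create it; a minimal sketch, assuming a pool named 'tank':
  # Only attempt an import if the pool is not already active (pool name is an example)
  zpool list tank >/dev/null 2>&1 || zpool import tank
  zpool status tank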
2010 Jan 12
6
x4500/x4540: do the internal controllers have a BBU?
Has anyone worked with an x4500/x4540 and can tell me whether the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDs and SSDs to prevent data corruption in case of a power failure.
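If the controllers turn out not to have a battery-backed cache, the per-disk write cache can be examined and disabled from format's expert mode; an interactive sketch, assuming a Solaris system (menu entries may differ by driver):
  format -e          # expert mode exposes the cache menu
  #  -> select the disk
  #  -> cache
  #  -> write_cache
  #  -> display / disable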
2007 Apr 27
2
Scrubbing a zpool built on LUNs
I'm building a system with two Apple RAIDs attached. I have hardware RAID5 configured, so no RAIDZ or RAIDZ2, just a basic zpool pointing at the four LUNs representing the four RAID controllers. For on-going maintenance, will a zpool scrub be of any benefit? From what I've read, with this layer of abstraction ZFS is only maintaining the metadata and not the actual data on the
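Whatever the layering, the scrub itself is cheap to request, and it reads and verifies the checksum of every allocated block, file data as well as metadata; a minimal sketch, assuming the pool is named 'tank':
  zpool scrub tank          # read and verify every allocated block's checksum
  zpool status -v tank      # check progress and any checksum errors found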
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 vdevs in the same zpool
I have a customer who has implemented the following layout: As you can
see, he has mostly raidz vdevs but has one raidz2 in the same zpool.
What are the implications here? Is this a bad thing to do? Please
elaborate.
Thanks,
Scott Gaspard
Scott.J.Gaspard at Sun.COM
>   NAME        STATE   READ WRITE CKSUM
>   chipool1    ONLINE     0     0     0
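Mixing replication levels is allowed, but zpool warns about mismatched levels at creation time and requires -f to proceed; the practical implication is that the pool's fault tolerance is only as good as its weakest vdev. A sketch with hypothetical device names:
  zpool create -f chipool1 \
      raidz  c1t0d0 c1t1d0 c1t2d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0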
2010 Nov 06
10
Apparent SAS HBA failure-- now what?
My setup: A SuperMicro 24-drive chassis with Intel dual-processor
motherboard, three LSI SAS3081E controllers, and 24 SATA 2TB hard drives,
divided into three pools with each pool a single eight-disk RAID-Z2. (Boot
is an SSD connected to motherboard SATA.)
This morning I got a cheerful email from my monitoring script: "Zchecker has
discovered a problem on bigdawg." The full output is
2008 Jul 28
1
zpool status my_pool shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris 10 5/08 (U5),
on a SunFire T5220, and this is our first rollout of ZFS and zpools.
We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0).
Created zpool my_pool as RAID-Z using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0.
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
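For reference, the pool described would have been created roughly as below, and after pulling a disk the usual test sequence is to watch status and then bring in the spare by hand if FMA has not done so. Device names are taken from the post; the rest is a hedged sketch:
  zpool create my_pool raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 spare c1t7d0
  zpool status -x my_pool                 # should flag the pulled disk once FMA notices
  zpool replace my_pool c1t6d0 c1t7d0     # manually activate the hot spare if needed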
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: what
would you do next to try to recover this ZFS pool?
I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was
composed of four 1.5 TiB disks. One disk is totally dead. Another had
SMART errors, but using GNU ddrescue I was able to copy all the data
off successfully.
I have copied all 3 remaining disks as images using
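With the surviving disks copied to image files, one option is to point the import at the directory holding those images; a hedged sketch, assuming the images live in /recovery (the directory name is an assumption, the pool name is from the post):
  # -d searches the given directory for vdevs instead of /dev/dsk
  zpool import -d /recovery -f bank0
  zpool status -v bank0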
2008 May 15
2
[storage-discuss] ZFS and fibre channel issues
The ZFS crew might be better able to answer this question. (CC'd here)
--jc
William Yang wrote:
> I am having issues creating a zpool using entire disks with a fibre
> channel array. The array is a Dell PowerVault 660F.
> When I run "zpool create bottlecap c6t21800080E512C872d14
> c6t21800080E512C872d15", I get the following error:
> invalid vdev
2007 Aug 30
15
ZFS, XFS, and EXT4 compared
I have a lot of people whispering "zfs" in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So a-benchmarking I went. Results at the bottom:
2008 Nov 26
9
ZPool and Filesystem Sizing - Best Practices?
Hello,
We have a new Thor here with 24 TB of disk (the first of many, hopefully).
We are trying to determine the best practices with respect to file system
management and sizing. Previously, we have tried to keep each file system
to a max size of 500GB to make sure we could fit it all on a single tape,
and to minimise restore times and impact should we experience some kind of
volume
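If the per-file-system cap is kept, it can be enforced with quotas rather than by carving up separate pools; a minimal sketch, assuming a pool named 'tank' and hypothetical file system names:
  zfs create -o quota=500G tank/projects1   # hard cap so it still fits on one tape
  zfs create -o quota=500G tank/projects2
  zfs get quota,used tank/projects1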
2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi,
Is there anything known about ZFS under 10.0-BETA4 when FreeBSD was
upgraded from 9.2-RELEASE?
I have two servers with very different hardware (one uses software RAID
and the other does not), and after a zpool upgrade there is no way to get the
server to boot.
Did I miss something when upgrading?
I cannot get the error message at the moment. I reinstalled the RAID
server under Linux and the other
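A common cause on FreeBSD is upgrading the pool without also updating the boot blocks, which then cannot read the newer pool version; a hedged sketch for a GPT-partitioned boot disk, with a hypothetical device name and partition index:
  # After 'zpool upgrade', reinstall the ZFS-aware boot code (GPT layout assumed)
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0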
2007 Dec 13
0
zpool version 3 & uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a very large performance drop on our ZFS storage server.
We have 2 pools: 'stor' is a raidz built from 7 iSCSI nodes, and 'home' is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level) we upgraded our NAS head from OpenSolaris b57
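To see which on-disk version each pool is actually running, and to finish a half-completed upgrade, the relevant commands are below (pool names taken from the post):
  zpool upgrade            # lists pools still on older on-disk versions
  zpool upgrade -v         # shows the versions this build supports
  zpool upgrade stor home  # upgrade the named pools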
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process:
hydra# zpool import
pool: tank
id:
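Since the pool is being moved between machines, the import usually has to be forced because the pool was last in use on the failed host; a sketch, using the pool name shown in the output:
  zpool import            # list importable pools and any reported problems
  zpool import -f tank    # force the import if it was last used on the other system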
2008 Aug 02
13
are these errors dangerous
Hi everyone,
I've been running a zfs fileserver for about a month now (on snv_91) and
it's all working really well. I'm scrubbing once a week and nothing has
come up as a problem yet.
I'm a little worried as I've just noticed these messages in
/var/adm/messages and I don't know if they're bad or just informational:
Aug 2 14:46:06
2006 Oct 25
4
Panic while scrubbing
Hello,
I am not sure if I am posting in the correct forum, but it seems somewhat zfs related, so I thought I'd share it.
While the machine was idle, I started a scrub. Around the time the scrubbing was supposed to be finished, the machine panicked.
This might be related to the 'metadata corruption' that happened earlier to me. Here is the log, any ideas?
Oct 24