similar to: Any way for snapshot zpool with RAIDZ or for independent devices

Displaying 20 results from an estimated 20000 matches similar to: "Any way for snapshot zpool with RAIDZ or for independent devices"

2010 Nov 12
11
how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Hi, how can I quiesce / freeze all writes to ZFS and a zpool if I want to take hardware-level snapshots or an array snapshot of all the devices under a pool? Are there any commands, ioctls, or APIs available? Thanks & Regards, sridhar.
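As far as I know there is no public freeze/thaw ioctl for ZFS; the closest safe approximation is to export the pool, which flushes and quiesces all writes, take the array snapshot, and import again. A minimal sketch, assuming a pool named tank:

  # zpool export tank        (pool is closed; all writes flushed to disk)
  ... take the array/hardware snapshot of all member devices ...
  # zpool import tank        (resume normal operation)

This means downtime for the pool's consumers; ZFS's own snapshots (zfs snapshot -r tank@now) are the usual alternative when the pool cannot be taken offline.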
2010 Sep 20
5
create mirror copy of existing zfs stack
Hi, I have a mirror pool tank with two devices underneath, created this way: #zpool create tank mirror c3t500507630E020CEAd1 c3t500507630E020CEAd0 Then I created the file system tank/home: #zfs create tank/home and another file system tank/home/sridhar: #zfs create tank/home/sridhar After that I created files and directories under tank/home and tank/home/sridhar. Now I detached the 2nd device, i.e.
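A caveat worth adding, since the snippet ends at the detach: a detached device cannot be imported as its own pool, because zpool detach removes the pool configuration from it. Recent builds have zpool split for making an importable copy from a mirror; a sketch assuming the same pool tank:

  # zpool split tank tankcopy      (peels one side of each mirror off into a new pool)
  # zpool import tankcopy          (mount the copy)

Where zpool split is not available, zfs send/receive of a recursive snapshot is the safe way to get an independent copy.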
2008 Nov 03
0
Zpool with raidz+mirror = wrong size displayed?
Hi, I installed a zpool consisting of:

  zpool
    mirror
      disk1 500GB
      disk2 500GB
    raidz
      disk3 1TB
      disk4 1TB
      disk5 1TB

It works fine, but it displays the wrong size (terminal -> zpool list). It should be 500GB (mirrored) + 2TB (usable out of the 3TB raidz) = 2.5TB, right? But it displays 3.17TB of disk space available. I first created the mirror and then added the raidz to it (zpool add
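The likely explanation (hedged, based on how zpool list reports space): zpool list shows raw vdev capacity before redundancy for raidz, but only one side of a mirror. In binary units a 500GB disk is about 0.45TiB and a 1TB disk about 0.91TiB, so:

  mirror: 0.45TiB                (one side of the pair)
  raidz:  3 x 0.91TiB = 2.73TiB  (parity space included)
  total:  ~3.18TiB, matching the ~3.17TB displayed

The usable figure (~0.45 + 1.82 = ~2.3TiB) only shows up at the file system level, e.g. in zfs list.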
2010 Oct 19
4
rename zpool
Hi, I have two questions: 1) Is there any way of renaming a zpool without export/import? 2) If I take a hardware snapshot of the devices under a zpool (where each snapshot device is an exact copy including metadata, i.e. the zpool and associated file systems), is there any way to rename the zpool on the snapshotted devices without losing the data? Thanks & Regards, sridhar.
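As far as I know there is no in-place rename; the supported path is export/import, and the same trick applies to a hardware-snapshot copy. A sketch, with names assumed for illustration:

  # zpool export tank
  # zpool import tank newtank        (the pool comes back under the new name)

For a snapshot copy presented as separate devices, zpool import -d /dev/dsk lists the importable pools; the hard part is importing a clone alongside its source, since both carry the same pool GUID.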
2012 Dec 30
4
Expanding a raidz vdev in zpool
Hello All, I have a zpool that consists of 2 raidz vdevs (raidz1-0 and raidz1-1). The first vdev is 4 1.5TB drives. The second was 4 500GB drives. I replaced the 4 500GB drives with 4 3TB drives, one at a time, and resilvered each. Now that the process is complete, I expected to have an extra 10TB (4*2.5TB) of raw space, but it's still the same amount of space. I did an export and
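One hedged possibility: on builds with the autoexpand pool property, the extra capacity does not appear after replacing every disk in a vdev unless the property is on. Something like:

  # zpool set autoexpand=on tank
  # zpool online -e tank c0t0d0    (repeat for each replaced disk; -e expands to the full size)

The pool and device names here are placeholders; zpool status shows the actual ones. On builds predating autoexpand, the export/import the poster already tried was the usual trigger.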
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 vdevs in the same zpool
I have a customer who has implemented the following layout: as you can see, he has mostly raidz vdevs but has one raidz2 in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM
> NAME       STATE   READ WRITE CKSUM
> chipool1   ONLINE     0     0     0
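For what it's worth, zpool itself treats this as a mistake: adding a vdev whose replication level differs from the pool's existing vdevs is refused unless forced. A sketch, with device names assumed:

  # zpool add chipool1 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
  (fails with a mismatched replication level error; -f overrides)

The practical implication is uneven redundancy: the pool only survives what its weakest vdev survives, so the raidz2 vdev's extra parity buys nothing for the pool as a whole.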
2008 Jul 28
1
zpool status my_pool shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris-10 u5/08 on a SunFire T5220; this is our first rollout of ZFS and zpools. We have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0). Created zpool my_pool as RAIDZ using 5 disks + 1 spare: c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0. I am working on alerting & recovery plans for disk failures in the zpool. As a test, I have pulled disk
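A hedged note on why this can happen: zpool status reflects the last known state, and a pulled disk may not be marked faulted until I/O is actually sent to it. Forcing I/O is the usual test:

  # zpool scrub my_pool      (touches every device; errors surface in status)
  # zpool status -x          (summarizes only unhealthy pools)

With FMA on Solaris 10, fmadm faulty should also report the disk once the failure is detected.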
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
2007 Apr 02
4
Convert raidz
Hi, Is it possible to convert a live 3-disk zpool from raidz to raidz2? And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch? Thanks
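As far as I know, neither operation is possible in place: a raidz vdev can be neither widened nor converted to raidz2. The workaround is a copy to a newly created pool; a sketch assuming spare disks and a new pool name tank2 (note that zfs send -R needs a reasonably recent build):

  # zpool create tank2 raidz2 disk1 disk2 disk3 disk4
  # zfs snapshot -r tank@migrate
  # zfs send -R tank@migrate | zfs recv -F -d tank2
  # zpool destroy tank                 (only after verifying tank2)

disk1..disk4 and the pool names are placeholders for illustration.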
2008 Jul 02
14
Changing GUID
Hi, How difficult would it be to write some code to change the GUID of a pool? ---- Thanks Peter
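Later ZFS releases grew a built-in for exactly this, so no custom code should be needed there:

  # zpool reguid tank      (generates a new random GUID for the pool)

On builds without zpool reguid, changing the GUID means rewriting the vdev labels, which is what the command does under the hood.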
2008 Nov 16
8
Mirror and RaidZ on only 3 disks
Hi, I have a small Linux server PC at home (Intel Core2 Q9300, 4 GB RAM), and I'm seriously considering switching to OpenSolaris (Indiana, 2008.11) in the near future, mainly because of ZFS. The idea is to run the existing CentOS 4.7 system inside a VM and let it NFS-mount home directories and other filesystems from OpenSolaris. I might migrate more services from Linux over time, but for
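One layout people use in this situation (a sketch only; the slice names are assumptions) is to split each of the three disks into two slices and build both pools across them:

  # zpool create fast raidz c0t0d0s0 c0t1d0s0 c0t2d0s0
  # zpool create safe mirror c0t0d0s1 c0t1d0s1 c0t2d0s1

This gives raidz capacity on one set of slices and a 3-way mirror on the other, at the cost of the two pools competing for the same spindles.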
2008 Dec 17
11
zpool detach on non-mirrored drive
I'm using ZFS not to have a fail-safe, backed-up system, but to easily manage my file system. I would like to be able, as I buy new hard drives, to simply replace the old ones. I'm very environmentally conscious, so I don't want to leave old drives in there consuming power after they've been replaced by larger ones. However, ZFS
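To be clear about the tooling (a sketch; device names assumed): zpool detach only applies to mirror members. The swap-for-a-bigger-disk workflow is zpool replace, after which the old drive can be physically removed:

  # zpool replace tank c0t2d0 c0t5d0   (resilvers onto the new disk, then drops the old one)

Once every disk in a vdev has been replaced with a larger one, the extra capacity becomes available (automatically, or via autoexpand / zpool online -e on builds that have them).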
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples of how to create zpools using full disks. The zpool(1M) man page uses "c0t0d0" but the OpenSolaris Bible and others show "c0t0d0p0". E.g.: zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0 I have not been able to find any discussion on whether (or when) to
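The usual guidance, hedged: give ZFS the bare disk name (c0t0d0). ZFS then writes an EFI label and manages the whole disk itself (and can safely enable the disk write cache); p0 is the x86 device for the whole disk via the fdisk table and is normally not needed. One way to check what you got:

  # zpool status tank      (whole-disk members are listed without a p*/s* suffix)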
2009 Aug 21
0
bug: zpool create allows a raw full-partition device as a pool member
If you run Solaris or OpenSolaris, you may use, for example, c0t0d0 (for a SCSI disk) or c0d0 (for an IDE/SATA disk) as the system disk. By default, Solaris x86 and OpenSolaris use the raw device c0t0d0s0 (/dev/rdsk/c0t0d0s0) as the member device of rpool. In fact, there can be more than one solaris2 partition on each hard disk, so we can also use a raw device like c0t0d0p1 (/dev/rdsk/c0t0d0p1)
2013 May 24
0
zpool resource fails with incorrect error
I'm working to expand / develop on the zpool built-in type, but the zpool command is failing, and Puppet's returned stderr is not what I get if I copy/paste the command given by the debug output.

# cat /etc/puppet/manifests/zpool_raidz2.pp
zpool { 'tank':
  ensure => present,
  raidz  => [ 'd01 d02 d03 d04', 'd05 d06
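For reference, a complete manifest in that style might look like this (a sketch against the stock zpool type; the parameter layout is assumed from its documentation):

  zpool { 'tank':
    ensure => present,
    raidz  => [ 'd01 d02 d03 d04', 'd05 d06 d07 d08' ],
  }

Each string in the raidz array is one vdev's worth of space-separated disks, which is a common source of quoting mistakes when the command is reassembled for the shell.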
2007 Sep 11
7
compression=on and zpool attach
I've got 12GB or so of db+web in a zone on a ZFS filesystem on a mirrored zpool. Noticed during some performance testing today that it's I/O-bound but using hardly any CPU, so I thought turning on compression would be a quick win. I know I'll have to copy files for existing data to be compressed, so I was going to make a new filesystem, enable compression, and rsync everything in,
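The plan described is the standard one; a sketch, with the filesystem names assumed:

  # zfs create -o compression=on tank/db2
  # rsync -a /tank/db/ /tank/db2/
  # zfs get compressratio tank/db2     (shows the achieved ratio)

Only data written after compression=on is compressed, which is why the copy step is needed; setting the property on the existing filesystem does nothing for blocks already on disk.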
2009 Oct 08
2
convert raidz from osx
I am converting a 4-disk raidz from OS X to OpenSolaris, and I want to keep the data intact. I want ZFS to get access to the full disk instead of a slice, i.e. c8d0 instead of c8d0s1. I wanted to do this one disk at a time and let it resilver. What is the proper way to do this? I tried, I believe from memory: zpool replace -f rpool c8d1s1 c8d1 but it didn't let me do that. Then I
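A hedged caution on the approach: zpool replace needs a new device that does not overlap the old one, and c8d1 contains c8d1s1, which is presumably why the replace was refused even with -f. The pattern that does work is replacing through a scratch disk, sketched with assumed names:

  # zpool replace tank c8d1s1 c9d0     (move the data off the slice)
  ... relabel c8d1 ...
  # zpool replace tank c9d0 c8d1       (move it back onto the whole disk)

one disk at a time, letting each resilver finish before the next step.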
2006 Oct 24
3
determining raidz pool configuration
Hi all, Sorry for the newbie question, but I've looked at the docs and haven't been able to find an answer for this. I'm working with a system where the pool has already been configured and want to determine what the configuration is. I had thought that'd be with zpool status -v <poolname>, but it doesn't seem to agree with the
2010 Aug 06
3
Reconfigure zpool
I have a zpool like this:

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c6t4d0  ONLINE
2007 Dec 13
0
zpool version 3 & uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a severe performance drop on our ZFS storage server. We have 2 pools: pool 1, stor, is a raidz out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level), we upgraded our NAS head from OpenSolaris b57
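A hedged first diagnostic for the version mismatch in the subject: compare what the pools are at against what the installed software supports:

  # zpool upgrade            (lists pools not at the latest on-disk version)
  # zpool upgrade -v         (lists the versions the installed bits support)
  # zpool upgrade stor       (upgrades one pool in place; one-way)

An uberblock version ahead of the pool version would be unexpected; zdb -u stor (assumption: available on this build) dumps the active uberblock for inspection.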