2008 Jul 28
1
zpool status my_pool shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris-10 u5/08,
on a SunFire T5220, and this is our first rollout of ZFS and zpools.
Have 8 disks, boot disk is hardware mirrored (c1t0d0 + c1t1d0)
Created Zpool my_pool as RaidZ using 5 disks + 1 spare:
c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0
I am working on alerting & recovery plans for disk failures in the zpool.
As a test, I have pulled disk
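[Note: ZFS of this era often won't notice a pulled disk until I/O to it is attempted; a scrub forces I/O to every device. A sketch using the pool name from the post; commands are standard Solaris 10:]

    # force I/O to all devices so the missing disk is detected
    zpool scrub my_pool
    zpool status -x my_pool
    # check whether FMA has diagnosed a fault
    fmadm faulty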
2012 Apr 12
2
backup to NTFS USB disk
Hello, *
I am setting up a backup on a Linux system with Windows XP workstations. The
backup goes to three alternating usb drives, each of which is NTFS formatted.
The disks should be virtually identical but they do not seem to be.
First, my mount command is this (I edited a bit for brevity)
mount -t ntfs-3g -o locale=nl_NL.iso-8859-1,silent /dev/disk/by-id/usb-DiskA \
/mnt/tmp ||
mount -t
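[Note: one way to check whether two backup disks really match is an rsync dry run; paths here are examples, not the poster's:]

    # -n = dry run: list differences without changing anything
    # --modify-window=1 tolerates coarse timestamps on non-native filesystems
    rsync -rvn --modify-window=1 --delete /mnt/diskA/ /mnt/diskB/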
2008 Jun 07
4
Mixing RAID levels in a pool
Hi,
I had a plan to set up a zfs pool with different raid levels but I ran
into an issue based on some testing I've done in a VM. I have 3x 750
GB hard drives and 2x 320 GB hard drives available, and I want to set
up a RAIDZ for the 750 GB and mirror for the 320 GB and add it all to
the same pool.
I tested detaching a drive and it seems to seriously mess up the
entire pool and I
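[Note: for reference, such a pool can be created in one command, though zpool warns about mismatched replication levels between vdevs and needs -f; device names are examples:]

    # one raidz vdev (3x 750 GB) plus one mirror vdev (2x 320 GB) in the same pool
    zpool create -f tank raidz c0t0d0 c0t1d0 c0t2d0 mirror c1t0d0 c1t1d0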
2006 Oct 24
3
determining raidz pool configuration
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I''m working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v <poolname>, but it doesn't seem to agree with the
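[Note: on ZFS of this vintage the vdev layout really is what zpool status prints; pool name is an example:]

    zpool status -v mypool    # shows how devices are grouped into raidz/mirror vdevs
    zpool list mypool         # overall pool size and usage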
2008 Apr 02
1
delete old zpool config?
Hi experts
zpool import shows some weird config of an old zpool
bash-3.00# zpool import
  pool: data1
    id: 7539031628606861598
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        data1        UNAVAIL   insufficient replicas
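[Note: a stale config like this usually means old ZFS labels are still on the disks. Solaris 10 of this era has no zpool labelclear, so one destructive option is to overwrite the label area. A sketch only; the device name is an example and this erases data:]

    # ZFS keeps two 256 KB labels at the start of each device (and two at the end)
    dd if=/dev/zero of=/dev/rdsk/c1t2d0s0 bs=256k count=2
    # the two trailing labels sit in the last 512 KB and need a seek to the end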
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao,
the root filesystem of my thumper is a ZFS with a single disk:
bash-3.2# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0

        spares
          c0t7d0    AVAIL
          c1t6d0    AVAIL
          c1t7d0
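[Note: the usual answer is yes; zpool attach converts a single-disk vdev into a mirror. The second device name is an example:]

    # attach a second disk; ZFS resilvers the new mirror automatically
    zpool attach rpool c5t0d0s0 c5t4d0s0
    # for a root pool the boot blocks must also be installed on the new
    # disk (installboot/installgrub as appropriate)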
2007 Jan 11
4
Help understanding some benchmark results
G'day, all,
So, I''ve decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern.
However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2008 Jan 10
2
NCQ
fun example that shows NCQ lowers wait and %w, but doesn't have
much impact on final speed. [scrubbing, devs reordered for clarity]
                     extended device statistics
    device    r/s    w/s     kr/s    kw/s  wait  actv  svc_t  %w  %b
    sd2     454.7    0.0  47168.0     0.0   0.0   5.7   12.6   0  74
    sd4     440.7    0.0  45825.9     0.0   0.0   5.5   12.4   0  78
    sd6     445.7    0.0
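[Note: output of this shape comes from iostat's extended statistics; the interval is an example:]

    # extended per-device statistics every 5 seconds
    iostat -x 5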
2006 Jan 30
4
Adding a mirror to an existing single disk zpool
Hello All,
I'm transitioning data off my old UFS partitions onto ZFS. I don't have a lot of duplicate space, so I created a zpool, rsync'ed the data from UFS to the ZFS mount, and then repartitioned the UFS drive to have partitions that match the cylinder count of the ZFS. The idea here is that once the data is over, I wipe out UFS and then attach that partition to the
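[Note: the attach step described at the end is the usual zpool attach idiom, just with a slice; pool and slice names are examples:]

    # mirror the existing ZFS slice with the freed former-UFS slice
    zpool attach mypool c0t0d0s6 c0t1d0s6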
2009 May 20
2
zfs raidz questions
Hi there,
I'm building a small NAS with 5x 1 TB disks. The disks currently contain some data, are formatted NTFS, and aren't part of a RAID.
Now I'm wondering whether it's possible to add the parity later, i.e. add the disks to the pool one by one and enable parity once the last disk is in.
(I have only one other 1 TB disk to back up the files.)
Thank you for your replies and
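[Note: for context, at least on the ZFS of this era raidz parity cannot be retrofitted; the raidz vdev must be created with all member disks at once. Device names are examples:]

    # all five disks must be present (and empty) at creation time
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0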
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance.
I did some simple mkfile 512G tests and found out that on average ~500 MB/s seems to be the maximum one can reach (tried initial default setup, all 46 HDDs as R0, etc.).
According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
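[Note: the test described is simple to repeat; the path is an example:]

    # time a 512 GB sequential write to estimate throughput
    ptime mkfile 512g /tank/testfile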
2009 Dec 06
20
Accidentally added disk instead of attaching
Hi,
I wanted to add a disk to the tank pool to create a mirror. I accidentally used 'zpool add' instead of 'zpool attach' and now the disk is added. Is there a way to remove the disk without losing data? Or maybe change it to a mirror?
Thanks,
Martijn
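[Note: for anyone hitting this later, the difference between the two commands; device names are examples:]

    # 'zpool add' creates a NEW top-level vdev; on this era of ZFS it cannot be removed
    zpool add tank c1t3d0
    # 'zpool attach' mirrors an EXISTING device; this is what was intended
    zpool attach tank c1t2d0 c1t3d0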
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5tb raidz1.
I want to add "phase 2" which is another 7x1.5tb raidz1
Can I add the second phase to the first phase and basically have two
raid5's striped (in raid terms?)
Yes, I probably should upgrade the zpool format too. Currently running
snv_104. Also should upgrade to 110.
If that is possible, would anyone happen to have the simple command
lines to
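[Note: for reference, adding a second raidz1 vdev stripes it with the first; device names are examples:]

    # add a second 7-disk raidz1 top-level vdev to the existing pool
    zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    # optionally bring the pool format up to date afterwards
    zpool upgrade tank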
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss,
One of the disks started to behave strangely.
Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1:
Apr 11 16:07:42 thumper-9.srv port 6: device reset
Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27):
Apr 11 16:07:42 thumper-9.srv
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
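[Note: the hot-spare interfaces, as they eventually shipped, look like this; device names are examples:]

    # create a pool with a shared hot spare
    zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
    # add or remove spares on a live pool
    zpool add tank spare c0t3d0
    zpool remove tank c0t3d0
    # manually swap a spare in for a failing device
    zpool replace tank c0t0d0 c0t2d0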
2009 Dec 28
0
[storage-discuss] high read iops - more memory for arc?
Prefetching on the file and device level has been disabled, yielding good results so far. We've lowered the number of concurrent I/Os from 35 to 1, causing the service times to go even lower (1 -> 8ms) but inflating actv (.4 -> 2ms).
I've followed your recommendation in setting primarycache to metadata. I'll have to check with our tester in the morning if it made
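[Note: the primarycache change mentioned is a per-dataset property; the dataset name is an example:]

    # cache only metadata in the ARC for this dataset
    zfs set primarycache=metadata tank/oradata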
2007 Oct 30
1
Different Sized Disks Recommendation
Hi,
I was first attracted to ZFS (and therefore OpenSolaris) because I thought that ZFS allowed the use of different-sized disks in raidz pools without wasted disk space. Further research has confirmed that this isn't possible--by default.
I have seen a little bit of documentation around using ZFS with slices. I think this might be the answer, but I would like to be sure what the
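[Note: the slice approach generally means carving equal-sized slices so every raidz member matches; sizes and device names are examples:]

    # after creating a 320 GB slice 0 on each disk with format(1M):
    zpool create tank raidz c0t0d0s0 c0t1d0s0 c0t2d0s0
    # remaining space on the larger disks can then form a second pool or vdev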
2009 Dec 24
1
high read iops - more memory for arc?
I'm running into an issue where there seems to be a high number of read IOPS hitting disks, and physical free memory is fluctuating between 200MB -> 450MB out of 16GB total. We have the l2arc configured on a 32GB Intel X25-E SSD and the slog on another 32GB X25-E SSD.
According to our tester, Oracle writes are extremely slow (high latency).
Below is a snippet of iostat:
r/s w/s
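[Note: for reference, the cache and log devices described are added like this; device names are examples:]

    # add the read cache (L2ARC) and separate intent log (slog) devices
    zpool add tank cache c2t0d0
    zpool add tank log c2t1d0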
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi,
One of my colleagues was confused by the output of 'zpool status' on a pool
where a hot spare is being resilvered in after a drive failure:
$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub:
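[Note: once the resilver completes, leftover error counts on the spare can be reset; pool name from the post:]

    # clear error counters after the resilver finishes
    zpool clear data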
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case drive c0t6d0 was failing so I ran,
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state.
Hopefully this is incorrect. At this point, the vdev in question
now has
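[Note: if the replaced disk is still listed once resilvering finishes, detaching it normally clears the DEGRADED state; names from the post:]

    # remove the replaced disk from the vdev once resilvering is done
    zpool detach home c0t6d0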