similar to: 100% kernel usage

Displaying 20 results from an estimated 4000 matches similar to: "100% kernel usage"

2007 Oct 30
1
Different Sized Disks Recommendation
Hi, I was first attracted to ZFS (and therefore OpenSolaris) because I thought that ZFS allowed the use of different sized disks in raidz pools without wasted disk space. Further research has confirmed that this isn't possible--by default. I have seen a little bit of documentation around using ZFS with slices. I think this might be the answer, but I would like to be sure what the
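A sketch of the slice approach, with hypothetical device and slice names: give every disk a slice sized to the smallest disk via format(1M), then build the raidz from the slices rather than the whole disks.

    # e.g. a 500 GB slice 0 on each of three different-sized disks
    zpool create tank raidz c1t1d0s0 c1t2d0s0 c1t3d0s0
    # leftover space on the larger disks can go into other slices/pools,
    # at the cost of competing seeks on the shared spindles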
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout: As you can see, he has mostly raidz vdevs but has one raidz2 in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM

> NAME       STATE   READ WRITE CKSUM
> chipool1   ONLINE     0     0     0
>
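For reference, zpool itself guards against this layout: adding a vdev whose replication level differs from the rest of the pool is refused unless forced. A minimal sketch with hypothetical devices:

    zpool create chipool1 raidz c0t0d0 c0t1d0 c0t2d0
    zpool add chipool1 raidz2 c0t3d0 c0t4d0 c0t5d0 c0t6d0
    # -> refused with a mismatched-replication-level warning; -f overrides:
    zpool add -f chipool1 raidz2 c0t3d0 c0t4d0 c0t5d0 c0t6d0
    # main implication: the pool's failure tolerance is that of its weakest vdev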
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
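One likely explanation, offered as a guess since the measurement commands aren't quoted: zpool list reports raw capacity including parity for raidz vdevs, while zfs list (and df) report usable space after parity. With 4 x 1.5TB drives (about 1.36TiB each):

    zpool list tank   # ~5.4T: raw size of all four disks, parity included
    zfs list tank     # ~4.0T usable for raidz1, ~2.7T for raidz2

so the 5.3TB figure matches the raw measure while the other two match the usable one.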
2009 Jul 07
0
borrow disk from raidz?
Hi all! I got a short question regarding data migration: I want to copy my data (~2TB) from an old machine to a new machine with a new raidz1 (6 disks x 1.5TB each). Unfortunately this is not working properly via network due to various (driver) problems on the old machine. So my idea was to borrow one disk from the new raidz1 (leaving the raid in degraded, but working status) attach the now
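The sparse-file trick is the usual answer here; a sketch, with hypothetical device names: create the raidz1 with a sparse file standing in for the borrowed disk, offline the file so the pool runs degraded, and replace it with the real disk after the copy.

    mkfile -n 1500g /fakes/disk6
    zpool create newpool raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 /fakes/disk6
    zpool offline newpool /fakes/disk6   # degraded but fully usable
    # ...copy the ~2TB locally, then return the borrowed disk:
    zpool replace newpool /fakes/disk6 c1t5d0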
2008 Jul 28
1
zpool status my_pool , shows a pulled disk c1t6d0 as ONLINE ???
New server build with Solaris-10 u5/08, on a SunFire t5220, and this is our first rollout of ZFS and Zpools. Have 8 disks; the boot disk is hardware mirrored (c1t0d0 + c1t1d0). Created Zpool my_pool as RaidZ using 5 disks + 1 spare: c1t2d0, c1t3d0, c1t4d0, c1t5d0, c1t6d0, and spare c1t7d0. I am working on alerting & recovery plans for disk failures in the zpool. As a test, I have pulled disk
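For test plans like this it's worth noting that ZFS typically notices a pulled disk only when I/O touches it or FMA reports the loss; forcing activity usually surfaces the fault (sketch):

    zpool scrub my_pool       # drive I/O to every device in the pool
    zpool status -x my_pool   # report only if the pool is unhealthy
    fmadm faulty              # what the fault manager has diagnosed
    fmdump -e | tail          # recent error telemetry, if any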
2007 Sep 14
3
Convert Raid-Z to Mirror
Is there a way to convert a 2-disk raid-z file system to a mirror without backing up the data and restoring? We have this:

bash-3.00# zpool status
  pool: archives
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE   READ WRITE CKSUM
        archives    ONLINE     0     0     0
          raidz1    ONLINE     0     0     0
            c1t2d0  ONLINE     0     0     0
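There is no in-place conversion; the usual path is a send/receive migration to a new mirror pool, sketched below with hypothetical spare disks and assuming a ZFS version with send/receive:

    zfs snapshot archives@migrate
    zpool create archives2 mirror c1t4d0 c1t5d0
    zfs send archives@migrate | zfs receive archives2/archives
    # verify the copy, then retire the old pool and take over its name:
    zpool destroy archives
    zpool export archives2
    zpool import archives2 archives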
2010 Apr 24
3
ZFS RAID-Z2 degraded vs RAID-Z1
Had an idea, could someone please tell me why it's wrong? (I feel like it has to be). A RaidZ2 pool with one missing disk offers the same failure resilience as a healthy RaidZ1 pool (no data loss when one disk fails). I had initially wanted to do a single-parity raidz pool (5 disks), but after a recent scare decided raidz2 was the way to go. With the help of a sparse file
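The equivalence holds for the next failure but not over time; a sketch of the difference on a hypothetical pool:

    zpool offline tank c0t3d0   # degraded raidz2: survives one more failure,
                                # same as a healthy raidz1
    zpool online tank c0t3d0    # but the raidz2 can resilver back to double
                                # parity; a raidz1 can never get past single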
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi, One of my colleagues was confused by the output of 'zpool status' on a pool where a hot spare is being resilvered in after a drive failure:

$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub:
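For reference, the commands usually used to dig into and then reset those counters once the resilver finishes (sketch):

    zpool status -v data   # -v also lists any files with unrecoverable errors
    zpool clear data       # zero the per-vdev error counters afterwards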
2008 Jun 07
4
Mixing RAID levels in a pool
Hi, I had a plan to set up a zfs pool with different raid levels but I ran into an issue based on some testing I've done in a VM. I have 3x 750 GB hard drives and 2x 320 GB hard drives available, and I want to set up a RAIDZ for the 750 GB and mirror for the 320 GB and add it all to the same pool. I tested detaching a drive and it seems to seriously mess up the entire pool and I
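A sketch of the layout being described (hypothetical device names); zpool add warns about the mismatched replication levels and needs -f, and note that top-level vdevs cannot be removed again once added:

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0   # 3 x 750 GB
    zpool add -f tank mirror c0t3d0 c0t4d0         # 2 x 320 GB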
2007 Jan 11
4
Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2009 Nov 17
1
upgrading to the latest zfs version
Hi guys, after reading the mailings yesterday I noticed someone was after upgrading to zfs v21 (deduplication). I'm after the same: I installed osol-dev-127 earlier, which comes with v19, and then followed the instructions on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to date. However, the system is reporting no updates are available and stays at zfs v19. Any ideas?
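Deduplication arrived with pool version 21 in build snv_128, so a system on osol-dev-127 will report nothing newer until the dev repository publishes that build. Once it does, the usual sequence is (sketch):

    pkg refresh --full
    pkg image-update    # creates a new boot environment; reboot into it
    zpool upgrade -v    # list the pool versions the new bits support
    zpool upgrade -a    # one-way upgrade of all pools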
2007 Mar 07
0
anyone want a Solaris 10u3 core file...
I executed sync just before this happened....

ultra:ultra# mdb -k unix.0 vmcore.0
Loading modules: [ unix krtld genunix specfs dtrace ufs sd pcipsy md ip sctp
usba fctl nca crypto zfs random nfs ptm cpc fcip sppp lofs ]
> $c
vpanic(7b653bd8, 7036fca0, 7036fc70, 7b652990, 0, 60002d0b480)
zio_done+0x284(60002d0b480, 0, a8, 7036fca0, 0, 60000b08d80)
zio_vdev_io_assess+0x178(60002d0b480, 8000,
2007 Dec 31
4
Help! ZFS pool is UNAVAILABLE
Hi All, I posted this in a different thread, but it was recommended that I post in this one. Basically, I have a 3-drive raidz array on internal Seagate drives, running build 64nv. I purchased 3 add'l USB drives with the intention of mirroring and then migrating the data to the new USB drives. I accidentally added the 3 USB drives in a raidz to my original storage pool, so now I have 2
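Because top-level vdevs cannot be removed once added, the pool now needs both raidz vdevs present to import. A first diagnostic pass might be (sketch, device path hypothetical):

    zpool import               # list importable pools and any missing devices
    zdb -l /dev/dsk/c2t0d0s0   # dump the four vdev labels on a suspect USB disk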
2012 Dec 30
4
Expanding a raidz vdev in zpool
Hello All, I have a zpool that consists of 2 raidz vdevs (raidz1-0 and raidz1-1). The first vdev is 4 1.5TB drives. The second was 4 500GB drives. I replaced the 4 500GB drives with 4 3TB drives, one at a time, and resilvered each. Now that the process is complete, I expected to have an extra 10TB (4*2.5TB) of raw space, but it's still the same amount of space. I did an export and
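The usual culprit (a guess, since the pool properties aren't shown) is the autoexpand property, which defaults to off; until it's enabled, or each device is grown explicitly, the vdev keeps its old size:

    zpool set autoexpand=on tank
    # or expand each replaced device in place:
    zpool online -e tank c0t0d0 c0t1d0 c0t2d0 c0t3d0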
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all, I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the following command: # zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 It worked fine, but I was slightly confused by the size yield (99 GB vs the 116 GB I had on my other RAID-Z1 pool of same-sized disks). I thought one of the disks might have been to blame, so I tried swapping it out
2009 Jul 20
1
zpool import problem / missing label / corrupted data
After a power outage due to a thunderstorm, my 3-disk raidz1 pool has become UNAVAILable. It is a ZFS v13 pool using the whole 3 disks, created on FreeBSD current 8 x64, and it worked well for over a month. Unfortunately I wasn't able to import the pool with either a FreeBSD LiveCD or the current OpenSolaris LiveCD x86/x64. When I tried to import the pool with FreeBSD the system just hangs (I
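Two things typically checked in this situation (sketch, device path hypothetical): the on-disk vdev labels, and, on builds that have it, recovery-mode import, which rolls back to the last consistent txg:

    zdb -l /dev/dsk/c0t0d0s0   # print vdev labels 0-3; a missing label shows as unreadable
    zpool import -f -F tank    # -F discards the last few transactions to find a good state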
2010 Jul 25
1
VMGuest IOMeter numbers
Hello, first time posting. I've been working with zfs on and off with limited *nix experience for a year or so now, and have read a lot of things by a lot of you, I'm sure. Still tons I don't understand or know, I'm sure. We've been having awful IO latencies on our 7210 running about 40 VMs spread over 3 hosts, no SSDs / intent logs.
2013 Mar 23
0
Drives going offline in Zpool
Hi, I have a Dell MD1200 connected to two heads (Dell R710). The heads have a Perc H800 card and the drives are configured in Raid0 (Virtual Disk) in the RAID controller. One of the drives crashed and was replaced by a spare. Resilvering was triggered but fails to complete due to drives going offline. I have to reboot the head (R710) and the drives come online. This happened repeatedly when
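When disks sit behind single-disk RAID0 virtual disks, ZFS only sees what the controller reports, so the transport and error logs are the first place to look (sketch):

    fmdump -eV | tail -50   # FMA error telemetry: timeouts, resets
    iostat -En              # per-device error counters and last error
    zpool status -v         # which vdevs ZFS has faulted or offlined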
2008 Oct 15
3
Linux HVM install in Solaris Dom-0 taking 36+ hours
Hi! I'm running snv_99 and just tried to install Fedora using the following script:

#!/bin/sh
ISO=nfs:dom0:/storage/misc/Downloads/ISOs/Linux/Fedora9/Fedora-9-x86_64
zfs create -V 20G storage/fedora
virt-install \
    --hvm \
    --os-type linux \
    --os-variant fedora8 \
    -n fedora \
    -f /dev/zvol/dsk/storage/fedora \
    -l ${ISO} \
    --vnc \
2008 Sep 05
0
raidz pool metadata corrupted nexenta-core->freenas 0.7->nexenta-core
I made a bad judgment and now my raidz pool is corrupted. I have a raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7 and tried to add my pool to FreeNAS. After adding the zfs disk, vdev and pool, I decided to back out and went back to OpenSolaris. Now my raidz pool will not mount, and I got the following errors. Hope some expert can help me recover from this error.