
Displaying 20 results from an estimated 9000 matches similar to: "Moving drives around..."

2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello all, I have constrained disk space (only 8GB) while running the OS inside a VM. Now I want to add more. It is easy to add for the VM, but how can I grow the filesystem in the OS? I cannot use autoexpand because it isn't implemented on my system: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it were snv_171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
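For reference, on later builds that do have autoexpand, growing the pool after enlarging the virtual disk might look like the sketch below (pool and device names are illustrative, not from the post):

    zpool set autoexpand=on rpool     # let vdevs grow into the enlarged virtual disk
    zpool online -e rpool c7d0        # expand this device immediately
    zpool list rpool                  # SIZE should now reflect the new capacity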
2011 Jan 28
2
ZFS root clone problem
(for some reason I cannot find my original thread, so I'm reposting it) I am trying to move my data off of a 40GB 3.5" drive to a 40GB 2.5" drive. This is in a Netra running Solaris 10. Originally what I did was: zpool attach -f rpool c0t0d0 c0t2d0. Then I did an installboot on c0t2d0s0. Didn't work. I was not able to boot from my second drive (c0t2d0). I cannot remember
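If the attach itself succeeded, the usual missing step on SPARC Solaris 10 is installing the ZFS boot block on the new disk after the resilver finishes; a minimal sketch using the device name from the post:

    zpool status rpool                # confirm the resilver of c0t2d0 has completed
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0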
2012 Jan 11
3
Unable to allocate dma memory for extra SGL
Hi all; We have a Solaris 10 U9 x86 instance running on Silicon Mechanics / SuperMicro hardware. Occasionally under high load (ZFS scrub for example), the box becomes non-responsive (it continues to respond to ping but nothing else works -- not even the local console). Our only solution is to hard reset after which everything comes up normally. Logs are showing the following: Jan 8
2011 May 19
8
Mapping sas address to physical disk in enclosure
Hi, we have a SunFire X4140 connected to a Dell MD1220 SAS enclosure, single path, MPxIO disabled, via an LSI SAS9200-8e HBA. Disks are visible with SAS addresses such as this in "zpool status" output:

    NAME          STATE     READ WRITE CKSUM
    cuve          ONLINE       0     0     0
      mirror-0    ONLINE       0     0     0
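One way people map SAS addresses to enclosure slots on LSI SAS2 HBAs such as the 9200-8e is LSI's sas2ircu utility, assuming it is installed for this controller (controller number is illustrative):

    sas2ircu LIST          # enumerate controllers
    sas2ircu 0 DISPLAY     # per drive: enclosure number, slot number, SAS address, serial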
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Thinking about it in terms of RAID5 I would expect to get (4-1)x18 worth of drive space, but df -h shows 4x18. Is this a bug or do I not understand? 2. Again thinking in RAID5 terms, if I have 4x18GB and 12x9GB drives and I want to make a RAIDZ of all of them, I would expect the 18GB drives to be treated as 9GB, so the RAIDZ would be 16x9GB. Is
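For what it's worth, the discrepancy is usually just which tool is being read: zpool list reports raw pool capacity including parity, while zfs list (and df) report usable space. A sketch with an illustrative pool name and approximate sizes:

    zpool list tank    # ~72GB raw for a 4x18GB raidz (parity included)
    zfs list tank      # ~54GB usable (parity excluded)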
2008 Apr 02
1
delete old zpool config?
Hi experts, zpool import shows some weird config of an old zpool:

bash-3.00# zpool import
  pool: data1
    id: 7539031628606861598
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        data1    UNAVAIL   insufficient replicas
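If the old pool's member disks are still in the box, the stale configuration lives in leftover ZFS labels on those disks; a sketch for clearing them (device name illustrative, and zpool labelclear only exists on later releases; an older workaround is simply to re-use the disk in a new pool so the stale labels get overwritten):

    zpool labelclear -f /dev/dsk/c1t2d0s0   # wipe the stale ZFS label from the old member
    zpool import                            # the FAULTED data1 entry should no longer appear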
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi! I have a problem with ZFS and most likely the SATA PCI-X controllers. I run OpenSolaris 2008.11 snv_98 and my hardware is a Sun Netra x4200 M2 with 3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis which each hold 4 SATA disks made by Seagate, model ES.2 (500 and 750), for a total of 12 disks. Every disk has its own eSATA cable connected to the ports on the PCI-X
2010 Sep 07
3
zpool create using whole disk - do I add "p0"? E.g. c4t2d0 or c4t2d0p0
I have seen conflicting examples on how to create zpools using full disks. The zpool(1M) page uses "c0t0d0" but OpenSolaris Bible and others show "c0t0d0p0". E.g.:

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    zpool create tank raidz c0t0d0p0 c0t1d0p0 c0t2d0p0 c0t3d0p0 c0t4d0p0 c0t5d0p0

I have not been able to find any discussion on whether (or when) to
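For reference, the bare cNtNdN form hands the whole disk to ZFS, which then writes an EFI label and can manage the disk (including its write cache); p0 on x86 names the same whole disk through the fdisk layer. A minimal sketch using the devices from the post:

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
    zpool status tank      # members appear as whole disks, with no slice suffix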
2007 Feb 03
4
Which label does a ZFS/ZPOOL device have? VTOC or EFI?
Hi All, ZPOOL/ZFS commands write an EFI label on a device when we create a ZPOOL/ZFS fs on it. Is that true? I formatted a device with a VTOC label and created a ZFS file system on it. Which label does the ZFS device have now, the old VTOC or EFI? After creating the ZFS file system on the VTOC-labeled disk, I am seeing the following warning messages: Feb 3 07:47:00 scoobyb
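As a quick check (device name illustrative): ZFS only writes an EFI label when it is given the whole disk (cNtNdN); if the pool was created on a slice (cNtNdNs0), the existing VTOC is left in place. The label type can be inspected with:

    prtvtoc /dev/rdsk/c0t0d0s2    # a VTOC (SMI) disk prints the traditional slice table
    format c0t0d0                 # the partition/label screens show whether SMI or EFI is in use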
2009 Dec 02
7
san suport
Hi, I'm having problems attaching disks from an FC SAN to a Solaris 10 guest. The Xen host is an OpenSolaris box "SunOS node1 5.11 snv_127 i86pc i386 i86xpv". My Xen guest is named pg4. This command works fine: virsh attach-disk pg4 /dev/dsk/c8t600A0B800029D69A000013CA4B00E1ABd0 hdb Before that I was able to import this volume as a zpool on the Xen host - so the connection to this
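In case it helps, a sketch of what is typically needed inside the guest after virsh attach-disk so the new virtual disk and its pool become visible (pool name illustrative):

    devfsadm -Cv       # rebuild /dev links so the newly attached disk shows up
    format             # confirm the disk is now visible to the guest
    zpool import       # scan for importable pools on the new device
    zpool import tank  # then import the pool by name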
2008 Jun 05
6
slog / log recovery is here!
(From the README) # Jeb Campbell <jebc at c4solutions.net> NOTE: This is a last resort if you need your data now. This worked for me, and I hope it works for you. If you have any reservations, please wait for Sun to release something official, and don't blame me if your data is gone. PS -- This worked for me because I didn't try to replace the log on a running system. My
2007 Sep 27
6
Best option for my home file server?
I was recently evaluating much the same question, but with only a single pool and my disks sized equally. I only need about 500GB of usable space, so I was weighing 4x 250GB SATA drives against 5x 160GB SATA drives. I had intended to use an AMS 5-disk-in-3 5.25" bay hot-swap backplane. http://www.american-media.com/product/backplane/sata300/sata300.html I priced
2006 Jul 19
1
Q: T2000: raidctl vs. zpool status
Hi all, IHACWHAC (I have a colleague who has a customer - hello, if you're listening :-) who's trying to build and test a scenario where he can salvage the data off the (internal?) disks of a T2000 in case the sysboard, and with it the on-board RAID controller, dies. If I understood correctly, he replaces the motherboard, does some magic to get the RAID config back, but even
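For comparing the two views, a sketch (volume and device names illustrative): raidctl shows what the on-board controller believes it has, while zpool status shows what ZFS sees on top of those volumes.

    raidctl -l             # list hardware RAID volumes on the on-board controller
    raidctl -l c0t0d0      # details (members, state) of one volume
    zpool status           # ZFS's view of the same devices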
2009 Feb 12
2
Solaris and zfs versions
We've been experimenting with zfs on OpenSolaris 2008.11. We created a pool in OpenSolaris and filled it with data. Then we wanted to move it to a production Solaris 10 machine (generic_137138_09) so I "zpool exported" in OpenSolaris, moved the storage, and "zpool imported" in Solaris 10. We got: Cannot import 'deadpool': pool is formatted
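The usual cause is that the newer OpenSolaris release formats the pool with an on-disk version the older Solaris 10 host does not understand. A sketch for checking, and for pinning an older version when re-creating a pool that has to move (version number and device illustrative):

    zpool upgrade -v                             # list the pool versions this host supports
    zpool create -o version=10 deadpool c1t0d0   # create at an on-disk version the Solaris 10 target supports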
2010 Aug 24
7
SCSI write retry errors on ZIL SSD drives...
I posted a thread on this once long ago[1] -- but we're still fighting with this problem and I wanted to throw it out here again. All of our hardware is from Silicon Mechanics (SuperMicro chassis and motherboards). Up until now, all of the hardware has had a single 24-disk expander / backplane -- but we recently got one of the new SC847-based models with 24 disks up front and 12 in the
2007 Sep 28
5
ZFS Boot won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the letter. I tried first with a mirrored zfsroot; when I try to boot to zfsboot the screen is flooded with "init(1M) exited on fatal signal 9". Then I tried with a simple zfs pool (not mirrored) and it just reboots right away. If I try to setup grub
2007 Aug 14
2
restore lost pool after vtoc re-label
Hi all, I've been using a SAN LUN as the sole member of a zpool with one additional ZFS filesystem. This is a flat SAN fabric, so this LUN was available to other systems on the fabric, and one of them came up with "wrong magic number" for several drives; as best I can tell, the VTOC for my zpool LUN was over-written on that host via format labeling to correct the error.
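ZFS keeps four copies of its label, two at the front and two at the end of the device, so a relabel may only have clobbered the front pair; a sketch for checking (device path illustrative):

    zdb -l /dev/rdsk/c2t0d0s0     # prints labels 0-3; intact labels include the pool config
    # if labels 2 and 3 (at the end of the device) survived, restoring the original
    # slice layout so the device covers the same blocks may let zpool import find the pool again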
2006 Oct 24
3
determining raidz pool configuration
Hi all, Sorry for the newbie question, but I've looked at the docs and haven't been able to find an answer for this. I'm working with a system where the pool has already been configured and want to determine what the configuration is. I had thought that'd be with zpool status -v <poolname>, but it doesn't seem to agree with the
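For reference, zpool status does show the vdev layout once the indentation is read as a tree; a sketch of a raidz pool (names illustrative; the vdev is labeled "raidz" on older builds and "raidz1-0" on newer ones). Each second-level entry (raidz1-0, mirror-0, or a bare disk) is a top-level vdev, and the disks beneath it are its members; zpool list adds the raw size.

    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        c1t0d0  ONLINE       0     0     0
        c1t1d0  ONLINE       0     0     0
        c1t2d0  ONLINE       0     0     0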
2009 Sep 26
5
raidz failure, trying to recover
Long story short, my cat jumped on my server at my house, crashing two drives at the same time. It was a 7-drive raidz (next time I'll do raidz2). The server crashed complaining about a drive failure, so I rebooted into single-user mode, not realizing that two drives had failed. I put in a new 500GB replacement and had zfs start a replace operation, which failed at about 2% because there were two broken
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
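For context, the hot-spare interface as it eventually shipped looks like the sketch below (the draft in the post may differ in detail; pool and device names illustrative):

    zpool add tank spare c2t0d0           # add a disk to the pool as a hot spare
    zpool status tank                     # spares are listed in their own "spares" section
    zpool replace tank c1t3d0 c2t0d0      # swap a failed member for the spare
    zpool remove tank c2t0d0              # release an unused spare from the pool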