Displaying 20 results from an estimated 400 matches similar to: "Drive id confusion"
2010 Feb 27
1
slow zfs scrub?
hi all
I have a server running snv_131 and the scrub is very slow. I have a cron job for starting it every week; it has now been running for a while and it's very, very slow:
scrub: scrub in progress for 40h41m, 12.56% done, 283h14m to go
The configuration is listed below, consisting of three raidz2 groups with seven 2TB drives each. The root fs is on a pair of X25M (gen 1)
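A minimal first check for what is dragging the scrub (a sketch; the pool name tank is assumed, it is not shown in the excerpt):
# zpool iostat -v tank 10   # per-disk read/write bandwidth at 10-second intervals
# iostat -xn 10             # one disk with asvc_t far above its peers will slow the whole scrub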
2010 Oct 19
8
Balancing LVOL fill?
Hi all
I have this server with some 50TB of disk space. It originally had 30TB on WD Greens and was filled quite full, so another storage chassis was added. Now the space problem is gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those Green drives suck quite hard, but not
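Per-vdev fill can be read straight from the pool (sketch, pool name tank assumed). ZFS biases new allocations toward the emptier vdevs, but it does not rebalance data that is already written:
# zpool iostat -v tank      # the alloc/free columns per top-level vdev show the imbalance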
2006 Oct 05
1
solaris-supported 8-port PCI-X SATA controller
I've lucked into some big disks, so I'm thinking of biting the bullet
(screaming loudly in the process) and superseding the SATA controllers
on my motherboard with something that will work with hot-swap in
Solaris. (Did I mention before that I'm still pissed about this?) I have
enough to populate all 8 bays (meaning adding 4 disks to what I have
now), so the 6 ports on the
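For reference, hot-swap on a Solaris-supported SATA controller goes through cfgadm; a rough sketch (the attachment-point name sata0/3 is only an example):
# cfgadm -al                      # list attachment points and their occupants
# cfgadm -c unconfigure sata0/3   # release a bay before pulling the disk
# cfgadm -c configure sata0/3     # bring the replacement online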
2010 Nov 06
10
Apparent SAS HBA failure-- now what?
My setup: A SuperMicro 24-drive chassis with Intel dual-processor
motherboard, three LSI SAS3081E controllers, and 24 SATA 2TB hard drives,
divided into three pools with each pool a single eight-disk RAID-Z2. (Boot
is an SSD connected to motherboard SATA.)
This morning I got a cheerful email from my monitoring script: "Zchecker has
discovered a problem on bigdawg." The full output is
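A first-pass check to separate a dead HBA from dead disks, as a sketch:
# zpool status -x       # which pools are reporting errors
# fmadm faulty          # FMA's view of faulted components
# format < /dev/null    # do the disks behind the suspect controller still enumerate?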
2006 Sep 11
7
installing a pseudo driver in a Solaris DOM U and DOM U reboot
Hello,
On a v20z, we have Solaris Xen on snv_44 (64-bit) as DOM 0
and Solaris Xen on snv_44 (64-bit) as DOM U.
We then installed a pseudo driver in the Solaris snv_44 DOM 1:
the installation is OK and the driver works as expected.
But after a reboot of DOM 1, the driver is no longer
there (modinfo does not list it).
Is there something special to do after a pseudo driver installation in
a Solaris
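A sketch of the usual persistence checks after installing a driver (the driver name mydrv is hypothetical):
# add_drv -m '* 0644 root sys' mydrv    # registers mydrv in /etc/name_to_major
# grep mydrv /etc/name_to_major /etc/driver_aliases
# modinfo | grep mydrv                  # is it loaded right now?
# modload /kernel/drv/amd64/mydrv       # try loading it by hand after the reboot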
2008 Nov 01
2
st devices not showing up in xvm
I have a Sony LIB-162 tape library hooked up to an Adaptec 39160 SCSI controller.
I'm running OpenSolaris / Indiana snv_98 with the CADP160 driver installed from
an SXCE DVD (snv_99). When I boot without xVM, I can run "mt config" and see the
configuration information for the first drive in the library. Under xVM, some SCSI
operations still work - I can access the changer, move
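A sketch of the usual checks for missing tape nodes (paths are the stock ones):
# devfsadm -i st    # rebuild device nodes for the st (tape) driver only
# ls /dev/rmt       # tape device links should appear here
# cfgadm -al        # does the HBA report the drive at all under xVM?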
2010 Jun 30
1
zfs rpool corrupt?????
Hello,
Has anyone encountered the following error message when running Solaris 10 u8 in
an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
bash-3.00# zpool status -v rpool
pool: rpool
state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in
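The recovery path that message points at, as a sketch:
# zpool status -v rpool   # lists the individual files flagged as corrupted
  (restore the listed files from backup, or reinstall the packages that own them)
# zpool clear rpool       # reset the error counters
# zpool scrub rpool       # verify the rest of the pool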
2013 Nov 09
3
usb ups on openindiana
Hello all,
I am trying to set up my brother's UPS (remotely, over the internet)
on an OpenIndiana-based storage box. According to him, the UPS is
probably an Ippon Back Power Pro 600 dated around 2003 (battery
recently replaced), and it has a USB connection.
Sorry for a relatively long post with a log of my successes and
failures, in the hope that someone will point out what I did
wrong.
2005 Oct 11
7
dtrace: failed to initialize dtrace: DTrace device not available on system
I have a number of systems running Solaris 10, and I see the DTrace package and binary installed; however, whenever we try to run anything we get this error:
dtrace: failed to initialize dtrace: DTrace device not available on system
The only system on which I don't get this error is the development server, which has the full Solaris 10 install, while the others are minimized. Do I need additional
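On a minimized Solaris 10 install the usual culprits are the missing DTrace packages, and with them the /dev/dtrace nodes; a sketch of the check (package names as seen on a full install):
# pkginfo | grep -i dtrace   # expect at least SUNWdtrc (core) and SUNWdtrp (providers)
# ls /dev/dtrace             # absent when the dtrace driver is not installed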
2009 Jul 23
1
why is zpool import still hanging in opensolaris 2009.06 ??? no fix yet ???
Follow-up: happy end...
It took quite some tinkering, but... I have my data back...
I ended up starting without the troublesome ZFS storage array, uninstalled the iscsitarget software and re-installed it... just to have Solaris boot without complaining about missing modules...
That left me with a system that would boot as long as the storage was disconnected... Reconnecting it made the boot
2008 May 19
3
How to build Xen with on-src-b79
I am trying to build Xen from on-src-b79. According to the OpenSolaris website, all xVM sources should be in the on-src-b79 tree, and no additional sources are required. I followed the procedure for doing the nightly build, and I expected that, as a result, I would build the xen.gz image along with the Solaris kernel. I was hoping to copy that xen.gz image to my test machine, so I can boot with my own
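For reference, the ON nightly build procedure the poster refers to is roughly the following (a sketch; the env file path is the stock one shipped with the build tools, adjust GATE and CODEMGR_WS for your workspace):
$ cp usr/src/tools/env/opensolaris.sh .
$ vi opensolaris.sh            # point GATE and CODEMGR_WS at the on-src-b79 workspace
$ nightly ./opensolaris.sh &   # full build; the log lands under $CODEMGR_WS/log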
2006 Oct 31
0
6317254 missing lock_dev()/unlock_dev() in devfsadm event_handler()
Author: cth
Repository: /hg/zfs-crypto/gate
Revision: 855003e47284a1376b3ce65d9f3629b60711fed8
Log message:
6317254 missing lock_dev()/unlock_dev() in devfsadm event_handler()
Files:
update: usr/src/cmd/devfsadm/devfsadm.c
2010 Oct 16
4
resilver question
Hi all
I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question?
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum is presented
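One way to answer this empirically, as a sketch (pool name tank assumed): resilver walks the pool's block tree, but the bulk of the read I/O and all of the repair I/O should land on the disks of the degraded vdev, and that is visible live:
# zpool status tank         # resilver progress and the disk being rebuilt
# zpool iostat -v tank 10   # read load should concentrate on the affected vdev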
2007 Dec 31
4
Help! ZFS pool is UNAVAILABLE
Hi All,
I posted this in a different thread, but it was recommended that I post in this one.
Basically, I have a 3-drive raidz array on internal Seagate drives, running build 64nv. I purchased 3 additional USB drives with the intention of mirroring and then migrating the data to the new USB drives.
I accidentally added the 3 USB drives in a raidz to my original storage pool, so now I have 2
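For anyone in the same spot: top-level vdevs cannot be removed from a pool in those builds, which is why this mistake is so hard to undo. zpool add has a dry-run flag that previews the resulting layout before committing anything (device names below are examples):
# zpool add -n mypool raidz c6t0d0 c7t0d0 c8t0d0   # -n prints the would-be configuration, changes nothing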
2006 Jul 18
1
file access algorithm within pools
Hello,
What is the access algorithm used within multi-component pools for a
given pool, and does it change when one or more members of the pool
become degraded?
Examples:
zpool create mtank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 mirror c5t0d0 c6t0d0
or:
zpool create ztank raidz c1t0d0 c2t0d0 c3t0d0 raidz c4t0d0 c5t0d0 c6t0d0
As files are created on the filesystem within these pools,
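Roughly: ZFS dynamically stripes new allocations across all top-level vdevs, biased toward those with more free space; when a member of a mirror or raidz degrades, reads are served from the surviving members of that vdev, but the choice of vdev for new writes does not change. This can be watched directly against the example pool above (sketch):
# zpool iostat -v mtank 5   # per-vdev read/write bandwidth while files are being written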
2006 Oct 31
0
PSARC/2002/762 Layered Trusted Solaris
Author: jpk
Repository: /hg/zfs-crypto/gate
Revision: e7e07b2f4fcfbe725493f4074f9e9f0d8bfd8e1c
Log message:
PSARC/2002/762 Layered Trusted Solaris
PSARC/2005/060 TSNET: Trusted Networking with Security Labels
PSARC/2005/259 Layered Trusted Solaris Label Interfaces
PSARC/2005/573 Solaris Trusted Extensions for Printing
PSARC/2005/691 Trusted Extensions for Device Allocation
PSARC/2005/723 Solaris
2006 Oct 31
1
ZFS thinks my 7-disk pool has imaginary disks
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the
following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs the
116 GB I had on my other RAID-Z1 pool of same-sized disks).
I thought one of the disks might have been to blame, so I tried swapping it
out
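One thing worth checking before blaming a disk: zpool list reports the raw size of a raidz pool (parity included), while zfs list reports usable space after parity, and a raidz vdev also sizes every member down to its smallest disk. A sketch against the pool above:
# zpool list magicant   # raw capacity of all 7 disks
# zfs list magicant     # usable space, roughly 6/7 of raw for a 7-disk raidz1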
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi!
I have a problem with ZFS and most likely the SATA PCI-X controllers.
I run OpenSolaris 2008.11 (snv_98) and my hardware is a Sun Netra X4200 M2 with
3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis
which each hold 4 SATA disks manufactured by Seagate, model ES.2
(500 and 750), for a total of 12 disks. Every disk has its own eSATA cable
connected to the ports on the PCI-X
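A sketch of a first step for telling a single slow disk or cable from a slow controller (pool name tank assumed):
# iostat -xnz 5            # one disk with asvc_t far above its peers points at a port or cable
# zpool iostat -v tank 5   # compare bandwidth across the disks behind each of the three cards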
2009 Jun 22
7
SPARC SATA, please.
Is there a card for OpenSolaris 2009.06 SPARC that will do SATA correctly yet? I need it for a super-cheapie, low-expectations SunBlade 100 filer, so I think it has to be notched for a 5V PCI slot, IIRC. I'm OK with slow -- main goals here are power saving (sleep all 4 disks) and 1TB+ space. Oh, and I hate to be an old head, but I don't want a peecee. They still scare me :)
2006 Oct 31
0
6304001 devfsadmd removes the wrong device links from
Author: cth
Repository: /hg/zfs-crypto/gate
Revision: 500ef11aa236a31cac035eae71de8fe663a498be
Log message:
6304001 devfsadmd removes the wrong device links from
Files:
update: usr/src/cmd/devfsadm/devfsadm.c