similar to: system unresponsive after issuing a zpool attach

Displaying 12 results from an estimated 12 matches similar to: "system unresponsive after issuing a zpool attach"

2008 Oct 30
7
Is there any way to check if DTrace is running or a DTrace probe is enabled?
Hi, I am adding DTrace probes within the NFS v3 client. In my current implementation I use some tsd_*() functions and the kmem_zalloc() function. These functions might be heavy and affect performance, so I want to call them only when DTrace is running or the DTrace probes are enabled. Is there a way to check whether DTrace is running or a DTrace probe is enabled? Regards, Danhua
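A rough way to see from userland whether any DTrace consumers are currently active on a Solaris/OpenSolaris host (a sketch only; it does not answer the in-kernel question, and pgrep only catches the dtrace(1M) command itself):

    pgrep -lf dtrace                 # command-line dtrace(1M) consumers, if any
    echo ::dtrace_state | mdb -k     # all active DTrace state, including anonymous tracing (needs root)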
2009 Jan 22
3
Failure to boot from zfs on Sun v880
Hi. I am trying to move the root volume from an existing SVM mirror to a ZFS root. The machine is a Sun V880 (SPARC) running nv_96, with OBP version 4.22.34, which is AFAICT the latest. The SVM mirror was constructed as follows:
  /     d4   m  18GB  d14
        d14  s  35GB  c1t0d0s0
        d24  s  35GB  c1t1d0s0
  swap  d3
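For reference, the usual Live Upgrade route from an SVM root to a ZFS root in that era looked roughly like the sketch below (disk, pool and BE names are assumptions; on SPARC the root pool must sit on a slice with an SMI label):

    zpool create rpool c1t1d0s0      # slice that will hold the ZFS root pool
    lucreate -n zfsBE -p rpool       # copy the running SVM-based BE into the pool
    luactivate zfsBE                 # make the new BE the one booted next
    init 6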
2006 Nov 07
6
Best Practices recommendation on x4200
Greetings all- I have a new X4200 that I'm getting ready to deploy. It has four 146 GB SAS drives. I'd like to set up the box for maximum redundancy on the data stored on these drives. Unfortunately, it looks like ZFS boot/root aren't really options at this time. The LSI Logic controller in this box only supports either a RAID0 array with all four disks, or a RAID 1
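A common layout at the time, sketched below on the assumption that the OS goes on the first two disks (mirrored with SVM or the controller's RAID 1) and the remaining pair is given to ZFS as a mirrored data pool (device names are assumptions):

    zpool create datapool mirror c0t2d0 c0t3d0   # ZFS mirror across the two spare disks
    zpool status datapool                        # confirm both halves are ONLINE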
2008 Apr 24
0
panic on zfs scrub on builds 79 & 86
This just started happening to me. It's a striped, non-mirrored pool (I know, I know). A zfs scrub causes a panic within a minute. I can also trigger a panic by doing tars etc. x86 64-bit kernel ... any ideas? Just to help rule out some things, I changed the motherboard, memory and CPU and it still happens ... I also think it happens on a 32-bit kernel. genunix: [ID 335743 kern.notice] BAD
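When a panic like this reproduces, the saved crash dump is the first place to look; a minimal sketch, assuming dumps land in the default /var/crash/<hostname> directory:

    cd /var/crash/`hostname`
    mdb -k unix.0 vmcore.0
      ::status      # panic string and dump summary (typed at the mdb prompt)
      ::stack       # stack of the panicking thread
      ::msgbuf      # console messages leading up to the panic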
2009 Oct 15
8
sub-optimal ZFS performance
Hello, ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome. I am running OSOL on my laptop, currently b124, and I found that the performance of ZFS is not optimal in all situations. Checking how much space the package cache for pkg(1) uses takes a bit longer on this host than on a comparable machine to which I transferred all the data. user at host:/var/pkg$ time
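Before blaming the filesystem it is worth ruling out pool-level causes; a quick sketch (the pool name rpool is an assumption) to run alongside the slow command:

    zpool status -v rpool      # scrub or resilver in progress? checksum errors?
    zpool list rpool           # capacity: nearly full pools slow down noticeably
    zpool iostat -v rpool 5    # per-vdev operations and bandwidth while the workload runs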
2016 Sep 01
2
status of virt-resize support for xfs?
Hey there everyone! Trying to use virt-builder to download & resize the standard fedora-23 image from libguestfs.org ... [rsawhill at jiop 11:50 ~] {0} $ rpm -q libguestfs-tools > libguestfs-tools-1.32.7-1.fc23.noarch > [rsawhill at jiop 11:50 ~] {0} $ virt-builder --list | grep fedora-23 > fedora-23 x86_64 Fedora 23 Server > fedora-23 i686
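For what it is worth, the usual flow is either to let virt-builder grow the image itself or to run virt-resize against a pre-created, larger output file; a sketch (image and partition names are assumptions based on a typical Fedora layout):

    virt-builder fedora-23 --size 20G -o fedora-23.img               # build and grow in one step
    # or resize an existing image:
    truncate -s 20G fedora-23-big.img
    virt-resize --expand /dev/sda3 fedora-23.img fedora-23-big.img   # grows the partition and its filesystem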
2008 Nov 17
0
Overhead evaluation of my nfsv3client probe implementation
Hi, Thanks for the comments on my nfsv3client probe implementation! I have made changes accordingly. Webrev: http://cr.opensolaris.org/~danhua/webrev/ To reduce the overhead, I use a local variable to save the XID rather than allocating memory with kmem_zalloc(). Regarding the overhead caused by tsd_get() and tsd_set(), I ran an experiment to measure it. In this experiment, I run a dtrace
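The comparison described can be sketched as timing the same NFS workload with and without the probes enabled (the nfsv3client provider name and the workload are assumptions):

    ptime dd if=/mnt/nfs/bigfile of=/dev/null bs=128k           # baseline, probes disabled
    dtrace -qn 'nfsv3client::: { @[probename] = count(); }' &   # enable every nfsv3client probe
    ptime dd if=/mnt/nfs/bigfile of=/dev/null bs=128k           # same workload, probes enabled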
2006 Jan 27
2
Do I have a problem? (longish)
Hi, to shorten the story, I will describe the situation. I have 4 disks in a zfs/svm config: c2t9d0 9G, c2t10d0 9G, c2t11d0 18G, c2t12d0 18G. c2t11d0 is divided in two: selecting c2t11d0 [disk formatted] /dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M). /dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M). /dev/dsk/c2t11d0s2 is in use by zpool storedge. Please
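A quick way to see exactly which slices each framework claims, and whether they overlap (device and metadevice names are taken from the message above):

    zpool status storedge            # slices that belong to the pool
    metastat -p d11                  # slices that make up the SVM stripe
    prtvtoc /dev/rdsk/c2t11d0s2      # slice table: does s2 (the whole-disk slice) overlap s0 and s1?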
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since the zdb process started, and:

  PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  827 root     4936M 4931M sleep   59    0   0:50:47 0.2% zdb/209

Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
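A sketch of how to see where that zdb process is actually spending its time (pid 827 is taken from the prstat output above):

    pstack 827                                                          # userland stacks of all 209 threads
    echo "0t827::pid2proc | ::walk thread | ::findstack -v" | mdb -k    # matching kernel stacks (needs root)
    iostat -xn 5                                                        # is the pool doing any I/O at all?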
2005 Nov 19
11
ZFS related panic!
> My current zfs setup looks like this:
>
> homepool               3.63G  34.1G      8K  /homepool
> homepool/db            61.6M  34.1G   8.50K  /var/db
> homepool/db/pgsql      61.5M  34.1G   61.5M  /var/db/pgsql
> homepool/home          3.57G  34.1G   10.0K  /users
> homepool/home/carrie      8K  34.1G      8K  /users/carrie
2016 Sep 01
0
Re: status of virt-resize support for xfs?
On Thu, Sep 01, 2016 at 11:55:50AM -0400, Ryan Sawhill wrote: > Hey there everyone! > > Trying to use virt-builder to download & resize the standard fedora-23 > image from libguestfs.org ... > > [rsawhill at jiop 11:50 ~] {0} $ rpm -q libguestfs-tools > > libguestfs-tools-1.32.7-1.fc23.noarch > > [rsawhill at jiop 11:50 ~] {0} $ virt-builder --list | grep
2008 Apr 29
24
recovering data from a dettach mirrored vdev
Hi, my system (Solaris b77) was physically destroyed and I lost data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool. But I still hope there is a way or a tool (like tct, http://www.porcupine.org/forensics/) I can use to recover at least partially some data. Thanks in advance for
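As a first step, zdb can at least show what metadata survives on the detached device (the device name below is an assumption):

    zdb -l /dev/rdsk/c1t1d0s0    # dump the vdev labels; the poster notes the uberblock is gone,
                                 # so expect labels that cannot simply be imported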