search for: pool0

Displaying 13 results from an estimated 13 matches for "pool0".

2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate of about 400K/s. (When this pool was first set up we saw rates in the MB/s range during a scrub.) Both zpool iostat and an iostat -Xn show lots of idle disk time, no above-average service times, no abnormally high busy percentages. Load on the box is 0.59. 8 x 3GHz, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
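A quick way to watch a scrub and per-vdev latency while it runs - a minimal sketch, assuming the pool is named pool0:

~$ zpool status pool0        # scrub rate and estimated completion time
~$ zpool iostat -v pool0 5   # per-vdev bandwidth and IOPS, sampled every 5s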
2009 Mar 01
8
invalid vdev configuration after power failure
...at does it mean for a vdev to have an invalid configuration and how can it be fixed or reset? As you can see, the following pool can no longer be imported: (Note that the "last accessed by another system" warning is because I moved these drives to my test workstation.) ~$ zpool import -f pool0 cannot import 'pool0': invalid vdev configuration ~$ zpool import pool: pool0 id: 5915552147942272438 state: UNAVAIL status: The pool was last accessed by another system. action: The pool cannot be imported due to damaged devices or data. see: http://www.sun.com/msg/ZFS...
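When an import fails like this, a common first step is to re-scan the device directory and inspect the on-disk vdev labels. A sketch; the device name is hypothetical, pool0 is the pool from the excerpt:

~$ zpool import -d /dev/dsk    # re-scan for importable pools
~$ zdb -l /dev/dsk/c0t0d0s0    # dump the four vdev labels on one member disk
~$ zpool import -f pool0       # retry once every member device is visible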
2008 Jan 31
16
Hardware RAID vs. ZFS RAID
Hello, I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array. I am considering going to ZFS and I would like to get some feedback about which situation would yield the highest performance: using the Perc 5/i to provide a hardware RAID0 that is presented as a single volume to OpenSolaris, or using the drives separately and creating the RAID0 with OpenSolaris and ZFS? Or
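For reference, the ZFS side of that comparison is just a pool with both disks as top-level vdevs (a dynamic stripe, ZFS's RAID0 equivalent). A minimal sketch with hypothetical device names:

~$ zpool create pool0 c1t0d0 c1t1d0   # stripe across both disks, no redundancy
~$ zpool status pool0

With no redundancy on either layout, ZFS checksums can still detect corruption, but self-healing needs a ZFS-managed mirror or raidz.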
2006 Sep 22
1
Linux Dom0 <-> Solaris prepared Volume
...license terms. DEBUG enabled xen v3.0-unstable chgset 'Tue Aug 15 19:53:55 2006 +0100 11134:ec03b24a2d83' WARNING: Found xen v3.0-unstable but need xen v3.0.2-3-sun WARNING: The kernel may not function correctly ==== : : : ==== pseudo-device: pm0 pm0 is /pseudo/pm@0 pseudo-device: pool0 pool0 is /pseudo/pool@0 panic[cpu0]/thread=fffffffec48d8c00: stisti >> warning! 8-byte aligned %fp = ffffff0001326fa8 ffffff0001326fa8 unix:sys_syscall+4c () syncing file systems... 2 2 done skipping system dump - no dump device configured rebooting... panic after init started - SHUTDOWN...
2014 Dec 18
1
Virtual machine removal through command line.
Hi, Until today, I hadn't found a way to cleanly remove a KVM virtual machine from the command line on CentOS 6 or 7! I had to run 'systemctl restart libvirtd' or 'service libvirtd restart'. After several months (!!!), I found this thread: https://github.com/pradels/vagrant-libvirt/issues/107 Now I know how to cleanly remove a KVM virtual machine (with default file location):
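A typical clean-removal sequence looks like the following; the domain name vm1 and the image path are illustrative, not from the thread:

# virsh destroy vm1                       # stop the domain if it is running
# virsh undefine vm1                      # remove the libvirt definition
# rm /var/lib/libvirt/images/vm1.img      # delete the backing disk image

On newer libvirt, 'virsh undefine vm1 --remove-all-storage' removes libvirt-managed volumes in the same step.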
2007 Apr 30
4
B62 AHCI and ZFS
...weston genunix: [ID 936769 kern.notice] winlock0 is /pseudo/winlock@0 Apr 27 09:30:10 weston pseudo: [ID 129642 kern.notice] pseudo-device: rsm0 Apr 27 09:30:10 weston genunix: [ID 936769 kern.notice] rsm0 is /pseudo/rsm@0 Apr 27 09:30:10 weston pseudo: [ID 129642 kern.notice] pseudo-device: pool0 Apr 27 09:30:10 weston genunix: [ID 936769 kern.notice] pool0 is /pseudo/pool@0 Apr 27 09:30:10 weston ipf: [ID 774698 kern.info] IP Filter: v4.1.9, running. Apr 27 09:30:16 weston rpcbind: [ID 489175 daemon.error] Unable to join IPv6 multicast group for rpc broadcast FF02::202 Apr 27 09:30:27...
2014 Jul 07
0
mount time of multi-disk arrays
...15.2-1-ARCH - 32GB RAM - dev 1-4 are 4TB Seagate ST4000DM000 (5900rpm) - dev 5 is a 4TB Western Digital WDC WD40EFRX (5400rpm) Thanks in advance André-Sebastian Liebe -------------------------------------------------------------------------------------------------- # btrfs fi sh Label: 'apc01_pool0' uuid: 066141c6-16ca-4a30-b55c-e606b90ad0fb Total devices 5 FS bytes used 14.21TiB devid 1 size 3.64TiB used 2.86TiB path /dev/sdd devid 2 size 3.64TiB used 2.86TiB path /dev/sdc devid 3 size 3.64TiB used 2.86TiB path /dev/sdf devid 4 size 3....
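On a multi-device btrfs filesystem, every member device has to be registered with the kernel before the mount will succeed; a common sequence, using the UUID from the excerpt and an illustrative mount point:

# btrfs device scan                                      # register all btrfs member devices
# mount UUID=066141c6-16ca-4a30-b55c-e606b90ad0fb /mnt/apc01_pool0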
2013 Sep 12
1
Doc: How to use NPIV in libvirt
...uest. To use it as a qemu-emulated disk, specify the "device" attribute as "device='disk|cdrom|floppy'". E.g. <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='blk-pool0' volume='blk-pool0-vol0'/> <target dev='hda' bus='ide'/> </disk> Or (using the LUN's path directly) <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <...
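The excerpt truncates before the direct-path variant; as a sketch only (the by-path value is a placeholder and the block-device form is an assumption, not taken from the doc), a LUN can also be attached as a plain block device:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- placeholder: use the LUN's stable /dev/disk/by-path name here -->
  <source dev='/dev/disk/by-path/...'/>
  <target dev='hdb' bus='ide'/>
</disk>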
2006 Oct 05
13
Unbootable system recovery
I have just recently (physically) moved a system with 16 hard drives (for the array) and 1 OS drive; and in doing so, I needed to pull out the 16 drives so that it would be light enough for me to lift. When I plugged the drives back in, initially, it went into a panic-reboot loop. After doing some digging, I deleted the file /etc/zfs/zpool.cache. When I try to import the pool using the zpool
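With the stale cache gone, ZFS falls back to scanning device labels instead of trusting the cached paths. A sketch of the usual sequence; the pool name is hypothetical:

~$ rm /etc/zfs/zpool.cache    # already done by the poster; drops stale device paths
~$ zpool import               # scan /dev/dsk and list importable pools
~$ zpool import -f pool0      # import once all 16 drives are visible again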
2008 Jan 05
11
Help with booting dom0 on a Dell 2950
Hi, I have installed b_78 on a Dell 2950 and booting to bare metal works fine, but when I try to boot using the grub entry Solaris xVM it will boot to the point where it displays the uname info and then just stays there. It will not boot past that point. I have enabled VT technology in the BIOS (but only after the installation). Where/what can I look at to troubleshoot this? I am new to xen and
2004 Aug 14
1
linux client not working properly
...mode = 0777 force group = todos force user = todos write list = @abogados, @adm_anc, @adm_rh, @archivos, @cobros, @comunicacion, @contabilidad, @judicial, @marcas, @migracion, @naves, @opadrmh, @pool, @receptel, @recursosh, @secresoc, @serv_grl, @sistemas, @sociedades, @socios, @traducc, secsoc52, pool0, pool1, pool2, pool3, pool7, pool5, pooladm, secsoc55, secsoc51, secsoc50, secsoc63 # The following two entries demonstrate how to share a directory so that two # users can place files there that will be owned by the specific users. In this # setup, the directory should be writable by both users an...
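For context, the excerpt is part of a share definition in smb.conf; a minimal share using the same directives might look like this (share name and path are illustrative):

[archivos]
   path = /srv/archivos
   writable = yes
   create mode = 0777
   force user = todos
   force group = todos
   write list = @pool, pool0, pool1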
2008 Jan 17
9
ATA UDMA data parity error
...17 06:42:15 san IMPACT: Automated diagnosis and response for these events will not occur. Jan 17 06:42:15 san REC-ACTION: Run pkgchk -n SUNWfmd to ensure that fault management software is installed properly. Contact Sun for support. Jan 17 06:42:17 san pseudo: [ID 129642 kern.info] pseudo-device: pool0 Jan 17 06:42:17 san genunix: [ID 936769 kern.info] pool0 is /pseudo/pool@0 Jan 17 06:42:34 san sendmail[885]: [ID 702911 mail.alert] unable to qualify my own domain name (san) -- using short name Jan 17 06:42:34 san sendmail[884]: [ID 702911 mail.alert] unable to qualify my own domain name (sa...
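Beyond the pkgchk check suggested in the REC-ACTION line, the Solaris fault manager itself is usually the place to dig into events like this:

~$ fmadm faulty    # list resources FMA currently considers faulted
~$ fmdump -eV      # dump the raw error-report telemetry, verbose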
2007 Dec 09
8
zpool kernel panics.
Hi Folks, I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris 10 280r (SPARC) server. The message I get on panic is this: panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment (offset=423713792 size=1024) This seems to come about when the zpool is being used or being scrubbed - about twice a day at the moment. After the reboot, the scrub seems to have
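A workaround often discussed for this class of panic (a space-map inconsistency) is to set the unsupported ZFS recovery tunables so the assertion logs instead of panicking, then scrub. Treat this strictly as a sketch of that approach, not a supported fix; the pool name is hypothetical:

~$ echo 'set zfs:zfs_recover=1' >> /etc/system
~$ echo 'set aok=1' >> /etc/system
(reboot, then)
~$ zpool scrub pool0    # watch progress with zpool status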