Displaying 8 results from an estimated 8 matches for "initialprogramload".
2008 Dec 26
19
separate home "partition"?
(I use the term loosely, because I know that ZFS prefers whole devices.)
When installing Ubuntu, I got into the habit of using a separate partition for my home directory, so that my data and GNOME settings would all remain intact when I reinstalled or upgraded.
I'm running OSol 2008.11 on an Ultra 20, which has only two drives. I've got all my data located in my home directory,
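A hedged sketch of the dataset-based equivalent (names assume the stock OpenSolaris 2008.11 rpool layout; "myuser" is hypothetical): rather than a separate partition, a per-user ZFS dataset keeps home data independent of the boot environment.

# One dataset per user keeps data and GNOME settings separate from
# the boot environment; the child inherits its mountpoint under
# /export/home from the parent dataset.
zfs create rpool/export/home/myuser

# A recursive snapshot before a reinstall or upgrade preserves the
# current state inside the pool itself.
zfs snapshot -r rpool/export/home@pre-reinstall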
2009 Jan 13
6
mirror rpool
Hi
Host: VirtualBox 2.1.0 (WinXP SP3)
Guest: OSol 5.11snv_101b
IDE Primary Master: 10 GB, rpool
IDE Primary Slave: 10 GB, empty
format output:
AVAILABLE DISK SELECTIONS:
0. c3d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
/pci0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c3d1 <drive unknown>
/pci0,0/pci-ide@1,1/ide@0/cmdk@1,0
# ls
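A hedged sketch of the usual rpool mirroring procedure for this layout (slice numbers are assumptions, and since c3d1 reports <drive unknown> it first needs an fdisk partition and SMI label from format(1M)):

# Copy the partition table from the existing rpool disk to the new disk.
prtvtoc /dev/rdsk/c3d0s2 | fmthard -s - /dev/rdsk/c3d1s2

# Attach the new device as a mirror of the existing one; ZFS
# resilvers the data automatically.
zpool attach rpool c3d0s0 c3d1s0

# Install the boot loader on the second disk so it is bootable too.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d1s0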
2008 Aug 08
1
[install-discuss] lucreate into New ZFS pool
...ew95> successful.
Creation of boot environment <new95> successful.
real 35:48.77
user 2:38.00
sys 6:12.22
--
Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke
Afrikaanse Stap Website: http://www.bloukous.co.za
My blog: http://initialprogramload.blogspot.com
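A hedged sketch of the Live Upgrade invocation behind output like the above (the BE name comes from the snippet; the target pool name is an assumption):

# Create a new boot environment in a ZFS root pool and time the copy;
# -n names the BE, -p selects the destination pool.
time lucreate -n new95 -p rpool

# Confirm the new boot environment is complete and activatable.
lustatus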
2008 Oct 23
2
zpool cross mount
Hi experts,
Short question:
What happens if we have cross-zpool mounts?
Meaning:
zpool A -> should be mounted at /A
zpool B -> should be mounted at /A/B
=> Is there an automatic mechanism in ZFS during the Solaris 10 boot that
ensures pool B (mounted at /A/B) is imported only after pool A,
or do we have to use legacy mounts and /etc/vfstab?
Regards,
Laurent
--
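A hedged sketch of the legacy-mount fallback the question mentions (pool names follow the example above; the vfstab fields shown are the standard seven):

# Take pool B's top-level dataset out of ZFS's automatic mount
# management, so /etc/vfstab controls the mount order instead.
zfs set mountpoint=legacy B

# /etc/vfstab entry: device-to-mount, device-to-fsck, mount point,
# FS type, fsck pass, mount-at-boot, mount options.
# B  -  /A/B  zfs  -  yes  -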
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device, which was actually a file on UFS. The machine was accidentally halted, and now the pool is corrupt. There are (of course) no backups, and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
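A hedged sketch of how the on-disk state can be inspected read-only before attempting any uberblock surgery (the backing-file path and pool name are assumptions; zdb does not modify the pool):

# Dump the four vdev labels from the backing file; each label records
# the pool configuration and transaction group (txg) information.
zdb -l /ufs/zpool.img

# Display the active uberblock for the pool.
zdb -u tank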
2008 Jul 02
14
is it possible to add a mirror device later?
Ciao,
the root filesystem of my Thumper is a ZFS pool with a single disk:
bash-3.2# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c5t0d0s0  ONLINE       0     0     0
        spares
          c0t7d0    AVAIL
          c1t6d0    AVAIL
          c1t7d0
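The answer to the subject line is yes; a hedged sketch (the second device name is hypothetical, and on this x86 box the new disk would also need boot blocks via installgrub):

# Attach a second device to the existing top-level vdev, converting
# the single disk into a two-way mirror; existing data resilvers.
zpool attach rpool c5t0d0s0 c5t4d0s0

# Watch the resilver progress.
zpool status rpool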
2009 Jan 13
12
OpenSolaris better than Solaris 10 u6 with regard to ARECA RAID card
Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card,
I got errors on all drives resulting from SCSI timeouts.
yoda:~ # tail -f /var/adm/messages
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]  Requested Block: 239683776  Error Block: 239683776
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]  Vendor: Seagate
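A hedged sketch of one way to quantify such errors per device (iostat -En is standard Solaris; rising transport error counts usually implicate the HBA or driver rather than the media):

# Show cumulative soft/hard/transport error counters and identity
# information for every disk on the system.
iostat -En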
2008 Aug 02
13
are these errors dangerous
Hi everyone,
I've been running a ZFS fileserver for about a month now (on snv_91), and
it's all working really well. I'm scrubbing once a week, and nothing has
come up as a problem yet.
I'm a little worried, though, as I've just noticed these messages in
/var/adm/messages, and I don't know if they're bad or just informational:
Aug 2 14:46:06
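A hedged sketch of the usual cross-check when kernel messages like these appear (the pool name is an assumption): compare them against what ZFS itself has recorded.

# Per-device read/write/checksum error counters, plus any files
# affected by permanent errors.
zpool status -v tank

# Trigger another scrub and re-check the counters when it finishes.
zpool scrub tank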