Displaying 20 results from an estimated 10000 matches similar to: "power disruption"
2008 Jul 26
1
Expanding a root pool
I'm attempting to expand a root pool for a VMware VM that is on an 8GB virtual disk. I mirrored it to a 20GB disk and detached the 8GB disk. I did "installgrub" to install grub onto the second virtual disk, but I get a kernel panic when booting. Is there an extra step I need to perform to get this working?
Basically I did this:
1. Created a new 20GB virtual disk.
2. Booted
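For context, the usual grow-by-mirror sequence on an x86 ZFS root looks roughly like this sketch (the device names c8t0d0s0/c8t1d0s0 are hypothetical, and the larger disk needs an SMI label with a suitably sized slice 0 before the attach):

# label the new disk first (format/fmthard), sizing s0 to use the extra space
zpool attach rpool c8t0d0s0 c8t1d0s0
# wait for the resilver to complete, then make the new disk bootable:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0
zpool detach rpool c8t0d0s0

A panic at boot after a sequence like this often means the GRUB stages or the boot slice setup never made it onto the surviving disk, but the post does not say enough to be sure.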
2009 Dec 21
0
Mirror config and installgrub errors
I've just bought a second drive for my home PC and decided to set up a mirror. I ran
pfexec zpool attach rpool c9d0s0 c13d0s0
waited for the resilver to finish, and tried to install grub on the second disk:
$ pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13d0s0
cannot open/stat device /dev/rdsk/c13d0s2
$ pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13d0
raw
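The "cannot open/stat device ... s2" failure usually indicates that the second disk has no Solaris fdisk partition or SMI label yet, since installgrub operates on slices. A plausible first check, with the device name taken from the post (the format steps are interactive):

$ pfexec prtvtoc /dev/rdsk/c13d0s2    # prints a VTOC only if an SMI label exists
$ pfexec format -e                    # select c13d0, create a Solaris fdisk partition, label as SMI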
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello,
I had snv_111b running for a while on an HP DL160G5, with two 16GB USB sticks comprising the mirrored rpool for boot and four 1TB drives comprising another pool, pool1, for data.
That had been working just fine for a few months. Yesterday I got it into my mind to upgrade the OS to the latest, then snv_127. That worked, and all was well. Also did an upgrade to the
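A quick way to see a discrepancy like this is to put the two views side by side (commands only, output omitted; pool names from the post):

$ zpool status rpool pool1    # the device names ZFS recorded
$ echo | format               # the device names the OS currently enumerates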
2008 Aug 26
5
Problem w/ b95 + ZFS (version 11) - seeing fair number of errors on multiple machines
Hi,
After upgrading to b95 of OSOL/Indiana and doing a ZFS upgrade to the newer
revision, all arrays I have using ZFS mirroring are displaying errors. This
started happening immediately after the ZFS upgrade. Here is an example:
ormandj at neutron.corenode.com:~$ zpool status
pool: rpool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was
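For a report like this, the usual first moves are a scrub and, if the errors do not reappear, a clear; this is a generic sketch, not a diagnosis of the b95 behaviour:

$ pfexec zpool scrub rpool
$ pfexec zpool status -v rpool    # after the scrub finishes, list any affected files
$ pfexec zpool clear rpool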
2011 Jul 04
0
Problems installing grub
Hi all
One of the rpool drives on this server died the other day, so I got a replacement that was 1 cylinder larger (60798 vs 60797). Still, I tried
prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c6d0s2
zpool replace worked and the pool resilvered within a few minutes. Now, installing grub fails.
root at prv-backup:~# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6d0s0
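Things worth checking before retrying installgrub in a case like this (the c6d0 name comes from the post; exact flags vary by release):

prtvtoc /dev/rdsk/c6d0s2       # confirm the VTOC copied over and that s0 exists
fdisk -W - /dev/rdsk/c6d0p0    # confirm a Solaris fdisk partition exists and is active
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6d0s0    # -m also rewrites the MBR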
2006 Oct 31
0
6273535 SUNWgrub* packages have incorrect pkginfo(4) settings
Author: jongkis
Repository: /hg/zfs-crypto/gate
Revision: 8e73d99dab0952293f379aca3e88188d768b3a0b
Log message:
6273535 SUNWgrub* packages have incorrect pkginfo(4) settings
6305469 READ_FAIL_STAGE2 in installgrub/message.h contains wrong message
6305481 installgrub miscounts stage2 size in sector by 1 if stage2 size is multiple of 512
6307439 make clobber in usr/src/grub and
2007 Jul 27
0
cloning disk with zpool
Hello the list,
I thought it should be easy to do a clone (not in the ZFS sense of the term) of a disk with zpool. This procedure is strongly inspired by
http://www.opensolaris.org/jive/thread.jspa?messageID=135038
and
http://www.opensolaris.org/os/community/zfs/boot/
But unfortunately this doesn't work, and we have no clue what could be wrong.
on c1d0 you have a zfs root
create a
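As background: on builds that have it, zpool split is the supported way to turn one half of a mirror into an independent pool, whereas a plainly detached disk generally cannot be imported on its own. A sketch, with the pool name assumed to be rpool and a hypothetical target disk c2d0:

zpool attach rpool c1d0s0 c2d0s0
# after the resilver completes:
zpool split rpool rpoolclone c2d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0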
2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello, all
I have constrained disk space (only 8GB) while running the OS inside a VM. Now I
want to add more. It is easy to add a disk to the VM, but how can I grow the filesystem in the OS?
I cannot use autoexpand because it isn't implemented on my system:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it were 171 it would be great, right?
Doing the following:
o added a new virtual HDD (it becomes
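Without the autoexpand property, the common workarounds are to re-import the pool after growing the LUN, use zpool online -e where the build supports it, or mirror onto a bigger virtual disk and drop the small one. A sketch of the mirror route (new disk name c8d1 hypothetical):

zpool attach rpool c8d0s0 c8d1s0
# after the resilver:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8d1s0
zpool detach rpool c8d0s0    # the pool should pick up the new size; an export/import or reboot may be needed on this build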
2011 Aug 08
2
rpool recover not using zfs send/receive
Is it possible to recover the rpool with only a tar/star archive of the root filesystem? I have used the zfs send/receive method and that works without a problem.
What I am trying to do is recreate the rpool and underlying zfs filesystems (rpool/ROOT, rpool/s10_uXXXXXX, rpool/dump, rpool/swap, rpool/export, and rpool/export/home). I then mount the pool at an alternate root and restore the tar
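A sketch of that recreate-and-restore flow from a rescue boot; the dataset names follow the post (laid out in the conventional rpool/ROOT/<BE> hierarchy), while the disk name, volume sizes, and archive path are hypothetical:

zpool create -f -R /a rpool c0t0d0s0
zfs create -o mountpoint=legacy rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/s10_uXXXXXX
zfs create -V 2G rpool/dump
zfs create -V 2G rpool/swap
zfs create rpool/export
zfs create rpool/export/home
(cd /a && tar xf /backup/root.tar)
zpool set bootfs=rpool/ROOT/s10_uXXXXXX rpool
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0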
2008 Aug 04
16
zpool upgrade wrecked GRUB
The machine is running x86 snv_94 after a recent upgrade from OpenSolaris 2008.05. ZFS and zpool reported no troubles except suggesting an upgrade from version 10 to version 11. Seemed like a good idea at the time. The system was up for several days after that point, then taken down for some unrelated maintenance.
Now it will not boot OpenSolaris; it drops to the GRUB prompt, no menus.
zfs was mirrored on two disks c6d0s0 and
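This is the classic ordering trap: zpool upgrade bumps the on-disk version, but the GRUB stage2 already installed in the boot blocks may be too old to read it, so it must be reinstalled from the running (new) build, e.g. from a failsafe or media boot. The first device name is from the post; the second is a guess at the other mirror half:

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d0s0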
2011 Jul 22
4
add device to mirror rpool in sol11exp
In my new Oracle server, sol11exp, it's using multipath device names...
Presently I have two disks attached: (I removed the other 10 disks for now,
because these device names are so confusing. This way I can focus on *just*
the OS disks.)
0. c0t5000C5003424396Bd0 <SEAGATE-ST32000SSSUN2.0-0514 cyl 3260 alt 2
hd 255 sec 252>
/scsi_vhci/disk at g5000c5003424396b
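The attach itself is unchanged with these WWN-style multipath names, just longer to type. A sketch using the disk from the post plus a hypothetical second disk:

zpool attach rpool c0t5000C5003424396Bd0s0 c0t5000C500342443F3d0s0
# then, on Solaris 11 Express, install GRUB on the new half:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t5000C500342443F3d0s0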
2010 Aug 28
0
zfs-discuss Digest, Vol 58, Issue 117
>> hi all
>> Trying to learn how the UFS-root to ZFS-root Live Upgrade works.
>>
>> I downloaded the VirtualBox image of s10u8; it comes up as UFS root.
>> added a new disk (16GB)
>> created zpool rpool
>> ran lucreate -n zfsroot -p rpool
>> ran luactivate zfsroot
>> ran lustatus; it does show zfsroot will be active on next boot
>> init 6
>> but it came up
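The quoted steps, consolidated into the standard Live Upgrade flow (pool disk name hypothetical; this restates the recipe, it does not explain why the VM came up wrong):

zpool create rpool c1t1d0s0
lucreate -n zfsroot -p rpool
luactivate zfsroot
lustatus
init 6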
2007 Sep 28
5
ZFS Boot Won't work with a straight or mirror zfsroot
Using build 70, I followed the zfsboot instructions at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ to the
letter.
I tried first with a mirrored zfsroot; when I try to boot to zfsboot
the screen is flooded with "init(1M) exited on fatal signal 9".
Then I tried with a simple zfs pool (not mirrored) and it just
reboots right away.
If I try to set up grub
2011 Nov 22
3
SUMMARY: mounting datasets from a read-only pool with aid of tmpfs
Hello all,
I'd like to report a tricky situation and a workaround
I've found useful - hope this helps someone in similar
situations.
To cut the long story short, I could not properly mount
some datasets from a readonly pool, which had a non-"legacy"
mountpoint attribute value set, but the mountpoint was not
available (directory absent or not empty). In this case
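The trick, as summarized: interpose a scratch tmpfs so the mountpoint directory can be created (and is empty), letting the dataset mount even though nothing persistent is writable. A sketch with hypothetical pool and path names:

mount -F tmpfs swap /mnt/roimport    # a writable, empty scratch tree
mkdir -p /mnt/roimport/data          # recreate the missing mountpoint directory
zfs mount ropool/data                # assumes its mountpoint attribute is /mnt/roimport/data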
2009 Mar 06
5
RePartition OS disk, give some to zpool
I've gotten knee-deep into learning how to use OpenSolaris and zfs, and I
see now that my goal of a home zfs server may have been better served if
I had partitioned the install disk, leaving some of the 60GB to be
added to a zpool.
First, how much space does a working OS need? I don't mean the bare
minimum, but enough to be comfortable and have some growing room (on the
install disk).
2008 Sep 13
3
Restore a ZFS Root Mirror
Hi all,
after installing OpenSolaris 2008.05 in VirtualBox I've created a ZFS root mirror with:
"zpool attach rpool <Disk A> <Disk B>"
and it works like a charm. Now I tried to restore the rpool from the worst-case
scenario: the disk the system was installed to (Disk A) fails.
I replaced Disk A with another virtual Disk C and tried to restore the rpool, but
my problem is that I
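The usual recovery from the surviving half is to boot from Disk B, give the blank Disk C a matching label, resilver onto it, and reinstall GRUB; a sketch with hypothetical device names (B = c1d1, C = c1d0):

prtvtoc /dev/rdsk/c1d1s2 | fmthard -s - /dev/rdsk/c1d0s2
zpool replace rpool c1d0s0    # resilver the failed half onto the new disk
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0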
2015 Aug 05
2
CentOS 5 grub boot problem
I never thought I'd say this, but I think it's easier to do this with
GRUB 2. Anyway, I did an installation to RAID1 arrays in CentOS 6's
installer, which still uses GRUB legacy. I tested removing each of the
two devices and it still boots. These are the commands in its log:
: Running... ['/sbin/grub-install', '--just-copy']
: Running... ['/sbin/grub',
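Done by hand with GRUB legacy, the equivalent is to install to both RAID1 members so either disk can boot alone; a sketch, assuming /boot is the first partition on each disk:

grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF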
2015 Aug 06
2
CentOS 5 grub boot problem
On 8/6/2015 4:39 PM, Chris Murphy wrote:
> On Thu, Aug 6, 2015 at 2:29 PM, Bowie Bailey <Bowie_Bailey at buc.com> wrote:
>> On 8/6/2015 4:21 PM, Chris Murphy wrote:
>>> On Thu, Aug 6, 2015 at 2:08 PM, Bowie Bailey <Bowie_Bailey at buc.com> wrote:
>>>
>>>> Doing a new install on the two 1TB drives is my current plan. If that
>>>>
2005 Dec 02
1
FIXED Re: Re: MD Raid 1 software device not booting not even reaching grub
Doing that, grub-install /dev/sda will give me the "corresponding BIOS
device" error.
But now I have fixed it by doing a manual grub install.
First, boot with CD 1 and type linux rescue at the prompt.
When you're at the Linux prompt, after it detects and mounts the
partitions, do a "chroot /mnt/sysimage",
then:
# grub --batch
grub> root (hd0,0)
 Filesystem type is ext2fs,
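The snippet cuts off here; the usual continuation of this recipe (an assumption, not the poster's verbatim steps) is:

grub> setup (hd0)
grub> quit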
2010 Feb 18
2
Killing an EFI label
Since this seems to be a ubiquitous problem for people running ZFS, even
though it's really a general Solaris admin issue, I'm guessing the
expertise is actually here, so I'm asking here.
I found lots of online pages explaining how to do it.
None of them were correct or complete, I think. I seem to have
accomplished it in a somewhat hackish fashion, possibly not
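For reference, the commonly cited non-hackish route is format's expert mode, which can write an SMI label over an EFI one (interactive; disk selection omitted):

$ pfexec format -e
format> label
# choose "0. SMI label" at the prompt, then verify with prtvtoc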