search for: liveupgrade

Displaying 20 results from an estimated 21 matches for "liveupgrade".

2006 Apr 26
1
zfs status with liveupgrade
Hello, can liveupgrade be done with zfs? I am running snv_35 on sparc; if so, how? Thanks, Art. This message posted from opensolaris.org
2008 Apr 08
6
lucreate error: Cannot determine the physical boot device ...
Hi, after typing # lucreate -n B85 I get the following: Analyzing system configuration. No name for current boot environment. INFORMATION: The current boot environment is not named - assigning name <BE1>. Current boot environment is named <BE1>. Creating initial configuration for primary boot environment <BE1>. ERROR: Unable to determine major and
2009 Mar 31
3
Bad SWAP performance from zvol
I've upgraded my system from ufs to zfs (root pool). By default, it creates a zvol for dump and swap. It's a 4GB Ultra-45 and every late night/morning I run a job which takes around 2GB of memory. With a zvol swap, the system becomes unusable and the Sun Ray client often goes into "26B". So I removed the zvol swap and now I have a standard swap partition. The
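A minimal sketch of the swap-device swap-out this poster describes (the zvol path follows the usual rpool layout; the disk slice name is hypothetical):

    # list current swap devices
    swap -l
    # remove the zvol-backed swap device
    swap -d /dev/zvol/dsk/rpool/swap
    # add a conventional disk slice as swap instead
    swap -a /dev/dsk/c0t0d0s1
    # make the change persistent by updating the swap entry in /etc/vfstab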
2006 Jul 25
1
Some initial experiences.
...g, but not file them against kernel/xen. Could that be added to the list of categories for submission? There's kind of a composite set of problems with bootadm(1M). While the modified bootadm will update bootadm-created entries in menu.lst, it *won't* modify those created by liveupgrade. As an extension of that, the bootadm-created entries in menu.lst are not always the *correct* entries to modify in a liveupgrade environment. (there's also the issue that the 'real' menu.lst is not always on the currently active BE. I can't currently work o...
2010 Aug 28
4
ufs root to zfs root liveupgrade?
hi all, trying to learn how UFS root to ZFS root LiveUpgrade works. I downloaded the vbox image of s10u8; it comes up as UFS root. I added a new disk (16GB), created zpool rpool, ran lucreate -n zfsroot -p rpool, ran luactivate zfsroot, and ran lustatus; it does show zfsroot will be active on next boot. After init 6, though, it still comes up with UFS root; lustatus shows ufsroot active, and zpool rpool is mounted but not used by boot. Is this a
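For reference, the UFS-to-ZFS-root migration sequence the poster describes looks roughly like this (pool and BE names taken from the post; the disk slice is hypothetical, and a sketch rather than a verified procedure):

    # create the root pool on the new disk (root pools need an SMI-labeled slice)
    zpool create rpool c1t1d0s0
    # clone the current UFS boot environment into the pool
    lucreate -n zfsroot -p rpool
    # mark the new BE active for next boot, verify, then reboot
    luactivate zfsroot
    lustatus
    init 6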
2007 Jun 03
4
/dev/random problem after moving to zfs boot:
...ened during the migration to zfs boot. I get an error message about /dev/random: "No randomness provider enabled for /dev/random. Use cryptoadm to provide one." Does anyone know how to fix this? Another thing: Is it possible to upgrade to a higher build when using zfs boot? Is this what LiveUpgrade does? And is there a step by step instruction on how to use LiveUpgrade with zfs boot? This message posted from opensolaris.org
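The error text itself points at cryptoadm(1M); a hedged sketch of the usual first steps, assuming the kernel software providers were lost from /etc/crypto/kcf.conf during the migration:

    # see which providers the cryptographic framework currently knows about
    cryptoadm list
    # resync the kernel's view with /etc/crypto/kcf.conf
    cryptoadm refresh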
2009 Feb 27
3
luactivate question
After a liveupgrade and luactivate I can log in to the -new- BE. My question is: do I have to luactivate the -old- BE again if I want to choose that one from the grub menu, or can I just run it if I want to? -- Dick Hoogendijk -- PGP/GnuPG key: 01D2433D + http://nagual.nl/ | SunOS sxce snv107 ++ + All that's rea...
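For context: a one-off boot of a non-active BE straight from the grub menu generally works, but making it the default again goes through luactivate; a minimal sketch (BE name hypothetical):

    # list boot environments and which one activates on next boot
    lustatus
    # make the old BE the default again, then reboot
    luactivate oldBE
    init 6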
2009 Mar 09
1
Other zvols for swap and dump?
Can you use a different zvol for dump and swap rather than using the swap and dump zvol created by liveupgrade? Casper
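In principle yes; a minimal sketch of pointing swap and dump at freshly created zvols (volume names and sizes are hypothetical):

    # create replacement volumes in the root pool
    zfs create -V 4G rpool/swap2
    zfs create -V 2G rpool/dump2
    # switch swap over to the new zvol
    swap -d /dev/zvol/dsk/rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap2
    # point the dump device at the new zvol
    dumpadm -d /dev/zvol/dsk/rpool/dump2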
2007 Sep 25
2
ZFS speed degraded in S10U4 ?
..., some patches applied, but definitely no one bothered to tweak the disk subsystem or anything else) installation of S10U3 is actually faster than S10U4, and a lot faster. Actually it's even faster on compressed ZFS with S10U3 than on uncompressed with S10U4. My configuration - default Update 3 LiveUpgraded to Update 4 with ZFS filesystem on a dedicated disk, and I'm working with the same files which are on the same physical cylinders, so it's not likely a problem with the HDD itself. I'm doing something as simple as just $time dd if=file.dbf of=/dev/null in a few parallel tasks. On Update3 it'...
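A minimal sketch of the benchmark the poster describes (file name from the post; the block size and degree of parallelism are assumptions):

    # three parallel sequential reads of the same file, each timed
    for i in 1 2 3; do
        time dd if=file.dbf of=/dev/null bs=128k &
    done
    wait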
2009 Jun 05
4
Recover ZFS destroyed dataset?
...'s situation as a whole is as follows (although slightly offtopic for the ZFS forum's subject): 1) They have an OpenSolaris machine with some zones set up, each zone root being a filesystem dataset. Some zones also have delegated datasets for data. 2) The system was to be upgraded with liveupgrade. Apparently something did not go well during lucreate, so the botched ABE was ludelete'd. 3) During ludelete my colleague noticed some messages about the inability to destroy some zones' ZFS pools because they are mounted (luckily, zones were booted), and aborted the ludelete operat...
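If whole pools were destroyed (as the excerpt mentions zones' ZFS pools, not just datasets), ZFS keeps destroyed pools discoverable while their devices are intact; a hedged sketch (pool name hypothetical):

    # list pools that were destroyed but whose devices still carry the labels
    zpool import -D
    # attempt to re-import a destroyed pool by name
    zpool import -D -f tank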
2006 Nov 07
6
Best Practices recommendation on x4200
Greetings all- I have a new X4200 that I'm getting ready to deploy. It has four 146 GB SAS drives. I'd like to setup the box for maximum redundancy on the data stored on these drives. Unfortunately, it looks like ZFS boot/root aren't really options at this time. The LSI Logic controller in this box only supports either a RAID0 array with all four disks, or a RAID 1
2019 Jan 02
0
Several problems on Solaris10
...completely ignored from LD_LIBRARY_PATH. > That's rather strange. I too have worked with Solaris (and for Sun in the past) from SunOS 4.1 onwards. I had this previous installation working, with LD_LIBRARY_PATH configured, on Solaris 10 u8 since 2014. During this relatively calm period I did a LiveUpgrade to the latest Solaris 10 (u11) and the only application that stopped working is dovecot. This means that (obviously) Solaris changed something about security. No binary is setuid or setgid in the dovecot dir: ls -l /usr/local/dovecot/sbin/dovecot -rwxr-xr-x 1 root root 259452 Dec 29 14:59 /usr/...
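A hedged diagnostic sketch for LD_LIBRARY_PATH apparently being ignored (binary path from the post; the crle alternative is an assumption, not a confirmed fix for this report):

    # check which shared objects the binary resolves, and from where
    ldd /usr/local/dovecot/sbin/dovecot
    # show the runtime linker's default search configuration
    crle
    # bake a library directory into the system default search path instead of
    # relying on LD_LIBRARY_PATH (directory hypothetical; this changes defaults
    # for every binary on the system, so use with care)
    crle -u -l /usr/local/lib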
2007 May 30
0
Help/Advice needed
I have a Solaris 11 build server with build 58 and a zfs scratch filesystem. When trying to upgrade to build 63 using liveupgrade I get the following upon reboot. The machine never comes up; it just keeps giving the error/warning below. Is there something I am doing wrong? WARNING: /ssm@0,0/pci@1c,600000/scsi@1 (mpt0): Received invalid reply frame address 0x480 WARNING: /ssm@0,0/pci@1c,600000/scsi@1...
2012 Jan 21
2
patching a solaris server with zones on zfs file systems
Hi All, please let me know the procedure for patching a server which has 5 zones on zfs file systems. The root file system is on an internal disk and the zones are on SAN. Thank you all, Bhanu
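One commonly cited approach is LiveUpgrade-based patching, sketched here on the assumption that LiveUpgrade is installed and the zones clone cleanly (BE name, patch directory, and patch IDs are all hypothetical):

    # clone the running boot environment, zones included
    lucreate -n patched
    # apply patches from a local directory to the inactive clone
    luupgrade -t -n patched -s /var/tmp/patches 141444-09 142900-15
    # activate the patched BE and reboot into it
    luactivate patched
    init 6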
2018 Dec 29
4
Several problems on Solaris10
Hi all, I've just upgraded my old Solaris 10 update 8 to Solaris 10 update 11 with the latest patches, but after the reboot with the new update I'm having a lot of problems with dovecot. My version is 2.2.13 (it was the latest one at the time of the first server setup). I have seen that, it seems, the new Solaris doesn't honour LD_LIBRARY_PATH. The first error was a
2007 Dec 06
17
HVM Windows: disk image vs. zvol - which is better?
Hi everyone! I'm planning to migrate a little firm's Windows server to a HVM machine. The test setup works fine (it's basically an MSSQL server and a pack of programs tied to it, for an accountancy office); all programs can reach the mssql from the outside on the hvm windows. In fact it turned out to be so good that I'm even planning to simply copy the win image(s) to the
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously with Solaris 10 using UFS I would use ufsdump and ufsrestore, which worked so well I was very confident with it. Now ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
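The usual ZFS analogue is a recursive snapshot streamed with zfs send; a minimal sketch, assuming a pool named "backup" already exists on the external disk (snapshot and pool names hypothetical):

    # snapshot the whole root pool hierarchy atomically
    zfs snapshot -r rpool@backup1
    # replicate it, properties and all, into the external pool
    # (-F rolls the target back, -d keeps the dataset layout, -u leaves it unmounted)
    zfs send -R rpool@backup1 | zfs receive -Fdu backup/rpool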
2009 Feb 18
4
Zpool scrub in cron hangs u3/u4 server, stumps tech support.
I've got a server that freezes when I run a zpool scrub from cron. Zpool scrub runs fine from the command line, no errors. The freeze happens within 30 seconds of the zpool scrub starting. The one core dump I succeeded in taking showed the ARC cache eating up all the ram. The server's running Solaris 10 u3, kernel patch 127727-11, but it's been patched and seems to have
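Since the core dump points at the ARC consuming all RAM, one common mitigation is capping the ARC in /etc/system and rebooting; a hedged sketch (the 1 GB cap is an arbitrary example, not a recommendation for this machine):

    * /etc/system - cap the ZFS ARC at 1 GB (0x40000000 bytes)
    set zfs:zfs_arc_max = 0x40000000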
2008 Nov 12
21
zfs boot - U6 kernel patch breaks sparc boot
Hi, in preparation to try zfs boot on sparc I installed all recent patches, incl. feature patches coming from s10s_u3wos_10, and after reboot finally 137137-09 (still having everything on UFS). Now it doesn't boot anymore: ############################### Sun Fire V240, No Keyboard Copyright 2006 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.22.23, 2048 MB memory installed,