I currently have b60 kinda installed (there, but still having some issues learning my way around configuration). I am downloading b62 now in order to try to switch to a zfs mirrored root. I thought that I should make sure I know what I am doing before I go forward, so here are a few questions...

1. Am I correct in assuming that the current mechanism (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) still requires that we boot into an installed version of b62 before converting to zfs?

2. Am I correct in assuming that the current mechanism still requires a miniature ufs boot to be on the system containing the grub information on how to get into zfs?

3. Am I correct that the zfs root can still be a mirror, but not any other kind of pool?

4. The current instructions appear to say that the zfs root goes either on its own drive (other than the default boot drive) or onto a different slice on the same drive. If I am understanding that correctly, then during install I would need to make the UFS slice big enough for the install and leave the other slice unformatted until I go to set up the zfs root. When I create the zfs root, the instructions appear to basically move all the files from the UFS slice onto the ZFS slice... so:

4a. Does this mean that the UFS slice is going to be basically a large-ish unused slice for the most part? I.e., there is no way to reclaim that space for the zfs partition?

4b. Does this mean that if I want the root mirrored, I need to do a UFS mirror for the root slice and a ZFS mirror for the secondary slices?

Wow. Ok, thanks for all the help and wish me good luck :)

Mal
William D. Hathaway
2007-Apr-24 16:58 UTC
[zfs-discuss] Re: Status Update before Reinstall?
I've only used Lori Alt's patch for b62 boot images via jumpstart (http://www.opensolaris.org/jive/thread.jspa?threadID=28725&tstart=15), which made it an easy process with mirrored ZFS boot drives and no UFS partitions required. If you have a jumpstart server, I think that is the best way to go.
--
William Hathaway
http://www.williamhathaway.com
On Tue, Apr 24, 2007 at 09:58:54AM -0700, William D. Hathaway wrote:
> I've only used Lori Alt's patch for b62 boot images via jumpstart
> (http://www.opensolaris.org/jive/thread.jspa?threadID=28725&tstart=15)
> which made it an easy process with mirrored ZFS boot drives and no UFS
> partitions required. If you have a jumpstart server, I think that is the
> best way to go.

Because it would be painful for me to set up a Jumpstart server where I want to install this VM, does anyone have any advice on making the dvd back into a bootable ISO? I'm assuming I want the El Torito -b option, but I'm not sure what file in the dvd filestructure to point it at.

Thanks!!

-brian
--
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly." -- Jonathan Patschke
hi Brian,

in the following blueprint you will find the way to make the dvd bootable:
http://www.sun.com/blueprints/0806/819-7546.pdf

Step 1: Copy the DVD to Writable Disk
The process of copying the DVD to writable disk is relatively straightforward. You need enough space on the target disk to store the copy (~3GB for the Solaris 10 1/06 OS release). You just use cpio or tar to perform the copy. Begin by mounting the DVD, either using vold or by manually mounting it (or, if you are starting from an ISO file, attach it with lofiadm and mount that instead):

# mount -F hsfs -o ro /dev/dsk/c1t0d0p0 /mnt
# lofiadm -a `pwd`/sol-10-u1-ga-x86-dvd.iso
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt
# mkdir /var/tmp/dvd
# cd /mnt
# find . -depth -print | cpio -pdm /var/tmp/dvd
6539088 blocks
# cd /; umount /mnt ; lofiadm -d /dev/lofi/1

Add the ZFS root-install patch to the dvd in /var/tmp/dvd!

Step 4: Recreate the ISO Image
When all customizations are complete in the miniroot and /var/tmp/dvd, you simply run mkisofs to recreate the ISO image. The mkisofs command has the following syntax:

# /usr/bin/mkisofs -d -D -J -l -r -U \
  -relaxed-filenames \
  -b boot/grub/stage2_eltorito \
  -no-emul-boot \
  -boot-load-size 4 \
  -boot-info-table \
  -c .catalog \
  -V "my_volume_name" \
  -o output.iso \
  /var/tmp/dvd

greets
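For reference, a quick way to sanity-check the rebuilt image before burning or booting it is to loop-mount it the same way. This is only a sketch; the /var/tmp/output.iso path and the /dev/lofi/1 device number are examples and will vary:

# lofiadm -a /var/tmp/output.iso
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt
# ls /mnt/boot/grub/stage2_eltorito      # the El Torito boot image that mkisofs -b points at
# umount /mnt ; lofiadm -d /dev/lofi/1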
Malachi de Ælfweald
2007-Apr-24 18:09 UTC
[zfs-discuss] Re: Re: Status Update before Reinstall?
As I don't have a jumpstart system either, I think this might be the best approach for me as well... I can use the old b60 install to make the modified b62.

Thanks,
Malachi

On 4/24/07, mario heimel <mheimel at web.de> wrote:
>
> hi Brian,
>
> in the following blueprint you will find the way to make the dvd bootable.
> http://www.sun.com/blueprints/0806/819-7546.pdf
>
> [...]
Brian Hechinger
2007-Apr-24 20:35 UTC
[zfs-discuss] Re: Re: Status Update before Reinstall?
On Tue, Apr 24, 2007 at 10:20:23AM -0700, mario heimel wrote:
> hi Brian,

Ok, the solution to the 'bad PBR sig' issue was to wholesale delete the VM and create a new one fresh. The install has started, we'll see how it goes. I'll report here.

-brian
I just installed a mirrored root system last night, but using Tim Foster's zfs-actual-root-install.sh script on a clean install of b62 (http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling).

You mention that no UFS slices are necessary using the patched DVD netinstall - is a dump slice still needed?

cheers,
-o
On Tue, Apr 24, 2007 at 04:51:10PM -0700, oliver soell wrote:
> I just installed a mirrored root system last night, but using Tim Foster's
> zfs-actual-root-install.sh script on a clean install of b62
> (http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling).
>
> You mention that no UFS slices are necessary using the patched DVD
> netinstall - is a dump slice still needed?

Yes, dump on ZVOL isn't currently supported, so a dump slice is still needed.

-brian
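For anyone wondering what that looks like in practice, pointing the system at a dedicated dump slice is a one-liner with dumpadm. A rough sketch only; the slice name c0t0d0s1 is just an example, use whichever slice you set aside at install time:

# dumpadm -d /dev/dsk/c0t0d0s1     # use the dedicated slice as the dump device
# dumpadm                          # print the resulting dump configuration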
Sorry I didn't respond to this earlier. Due to a really dumb mistake on my part (long embarrassing story omitted), I lost access to my mail for two days. Answers below:

Malachi de Ælfweald wrote:
> I currently have b60 kinda installed (there, but still having some
> issues learning my way around configuration). I am downloading b62
> now in order to try to switch to a zfs mirrored root. I thought that I
> should make sure I know what I am doing before I go forward, so here
> are a few questions...
>
> 1. Am I correct in assuming that the current mechanism (
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ )
> still requires that we boot into an installed version of b62 before
> converting to zfs?

As William Hathaway pointed out later, you can use my netinstall/dvd install update kit:

http://www.opensolaris.org/os/community/zfs/boot/netinstall

to do the install. Frankly, I really don't know all that much about the manual procedure because I haven't used it for months. I've been doing netinstalls and dvd installs instead.

> 2. Am I correct in assuming that the current mechanism still requires
> a miniature ufs boot to be on the system containing the grub
> information on how to get into zfs?

No, that's gone. We can boot zfs directly now.

> 3. Am I correct that the zfs root can still be a mirror, but not any
> other kind of pool?

Terminology:

  root pool - a zfs storage pool that contains a bootable dataset,
    identified by the value of the "bootfs" pool property
  bootable dataset - a dataset that contains a root file system

A root pool can be mirrored, but not striped or RAID-Z. (We hope to support striping and RAID-Z eventually.)

> 4. The current instructions appear to say that the zfs root goes either
> on its own drive (other than the default boot drive) or onto a
> different slice on the same drive. If I am understanding that
> correctly, then during install I would need to make the UFS slice big
> enough for the install and leave the other slice unformatted until I
> go to set up the zfs root. When I create the zfs root, the instructions
> appear to basically move all the files from the UFS slice onto the ZFS
> slice... so:

No ufs slice needed.

> 4a. Does this mean that the UFS slice is going to be basically a
> large-ish unused slice for the most part? I.e., there is no way to
> reclaim that space for the zfs partition?
>
> 4b. Does this mean that if I want the root mirrored, I need to do a
> UFS mirror for the root slice and a ZFS mirror for the secondary slices?
>
> Wow. Ok, thanks for all the help and wish me good luck :)

Good luck!

lori
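To make the terminology above concrete, a mirrored root pool plus a bootfs setting boils down to something like the following. This is only a hedged sketch: the pool, dataset and disk names are invented, and the actual layout is whatever the install kit creates for you.

# zpool create rootpool mirror c0t0d0s0 c0t1d0s0   # root pool as a plain two-way mirror (no striping or RAID-Z)
# zpool set bootfs=rootpool/rootfs rootpool        # the bootfs property identifies the bootable dataset
# zpool get bootfs rootpool                        # verify which dataset the pool will boot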
oliver soell wrote:
> I just installed a mirrored root system last night, but using Tim Foster's
> zfs-actual-root-install.sh script on a clean install of b62
> (http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling).
>
> You mention that no UFS slices are necessary using the patched DVD
> netinstall - is a dump slice still needed?

It looks to me, from looking at Tim's script, like you end up with the same swap/dump slice that you had when you initially installed the system with a ufs root (the script doesn't appear to be setting up a swap zvol). So if you used the manual procedure, with or without Tim's script, it looks like you don't need to create a dump slice. Though in that case, you're not exercising the zvol as a swap device.

If you use my netinstall kit:

http://www.opensolaris.org/os/community/zfs/boot/netinstall

and set up your profile as recommended in the README, you'll get a zvol as swap and a slice for dump (since we aren't able to dump into a zvol yet).

Lori
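For reference, swap on a zvol amounts to something like this (a sketch, not the kit's exact profile; the 2g size and the rootpool/swapvol name are just examples):

# zfs create -V 2g rootpool/swapvol          # create a 2 GB volume to back swap
# swap -a /dev/zvol/dsk/rootpool/swapvol     # add the zvol as a swap device
# swap -l                                    # confirm it shows up in the swap list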
On Wed, 2007-04-25 at 14:45 -0600, Lori Alt wrote:
> If you use my netinstall kit:
>
> http://www.opensolaris.org/os/community/zfs/boot/netinstall
>
> and set up your profile as recommended in the README,
> you'll get a zvol as swap and a slice for dump (since we aren't
> able to dump into a zvol yet).

Will we need to use this kit for further builds or will it be updated for new builds as they arrive?

--
Thanks...

Mike Dotson
Area System Support Engineer - ACS West
Phone: (503) 343-5157
Mike.Dotson at Sun.Com
Mike Dotson wrote:
> On Wed, 2007-04-25 at 14:45 -0600, Lori Alt wrote:
>> If you use my netinstall kit:
>>
>> http://www.opensolaris.org/os/community/zfs/boot/netinstall
>>
>> and set up your profile as recommended in the README,
>> you'll get a zvol as swap and a slice for dump (since we aren't
>> able to dump into a zvol yet).
>
> Will we need to use this kit for further builds or will it be updated
> for new builds as they arrive?

The same kit should work for future builds, until further notice.

Lori
> Yes, dump on ZVOL isn't currently supported, so a dump slice is still needed.

Maybe a dumb question, but why would anyone ever want to dump to an actual filesystem? (Or is my head thinking too Solaris)

Actually I could see why, but I don't think it is a good idea.

-brian
Malachi de Ælfweald
2007-Apr-26 00:42 UTC
[zfs-discuss] Re: Status Update before Reinstall?
Maybe so that it can grow rather than being tied to a specific piece of hardware?

Malachi

On 4/25/07, Brian Gupta <brian.gupta at gmail.com> wrote:
>
> > Yes, dump on ZVOL isn't currently supported, so a dump slice is still needed.
>
> Maybe a dumb question, but why would anyone ever want to dump to an
> actual filesystem? (Or is my head thinking too Solaris)
>
> Actually I could see why, but I don't think it is a good idea.
>
> -brian
On 4/25/07, Brian Gupta <brian.gupta at gmail.com> wrote:
> > Yes, dump on ZVOL isn't currently supported, so a dump slice is still needed.
>
> Maybe a dumb question, but why would anyone ever want to dump to an
> actual filesystem? (Or is my head thinking too Solaris)
>
> Actually I could see why, but I don't think it is a good idea.

When we talk about dump, we are talking about a crashdump, which is what the Solaris kernel does on panic. It dumps to the swap space. Then, at next boot, it takes that crashdump and puts it in /var/crash/<hostname> as part of the savecore process. Only really useful if you plan to submit a ticket to Sun under a support contract or know how to read a crashdump file.

I've gotten the SPARC assembly and Panic! books, but never did finish them, so a crashdump is mostly useless to me as a home Solaris user. :)

-brian
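A quick way to see how that is wired up on a given box (the paths shown are just the usual defaults):

# dumpadm                          # shows the dump device, savecore directory, and whether savecore is enabled
# ls /var/crash/`hostname`         # unix.N / vmcore.N pairs written by savecore after a panic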
Please bear with me, as I am not very familiar with ZFS. (And unfortunately probably won't have time to be until ZFS supports root boot and clustering in a named release.)

I do understand the reasons why you would want to dump to a virtual construct. I am just not very comfortable with the concept.

My instinct is that you want the fewest layers of software involved in the event of a system crashdump.

To me, dumping to logical volumes or filesystems seems like asking for trouble. Now on the other hand, if you were to dump to an underlying "zdev" it starts to make sense. (Assuming a zdev is basically a physical "chunk" of a LUN or disk.)

Please educate me as to what I am missing.

Thanks,
Brian

On 4/25/07, Malachi de Ælfweald <malachid at gmail.com> wrote:
> Maybe so that it can grow rather than being tied to a specific piece of
> hardware?
>
> Malachi
On Wed, Apr 25, 2007 at 09:05:09PM -0400, Brian Gupta wrote:
>
> I do understand the reasons why you would want to dump to a virtual
> construct. I am just not very comfortable with the concept.
>
> My instinct is that you want the fewest layers of software involved in
> the event of a system crashdump.

Hmmm, that's a good point. If ZFS is what caused the panic, you aren't guaranteed to get a crashdump to be able to diagnose what went wrong.

Maybe raw dump devices aren't such a bad idea after all. ;)

On that note, does anyone know what the rule of thumb on dump size is? RAM size? RAM+swap?

-brian
If I recall, the dump partition needed to be at least as large as RAM.

In Solaris 8(?) this changed, in that crashdump streams were compressed as they were written out to disk. Although I've never read this anywhere, I assumed the reasons this was done are as follows:

1) Large enterprise systems could support ridiculous (at the time) amounts of physical RAM. Providing a physical disk/LUN partition that could hold such a large crashdump seemed wasteful and expensive.

2) Compressing the dump before writing to disk would be faster, thus improving the chances of getting a full dump. (CPU performance has progressed at a much higher rate of change than disk throughputs have.)

(I don't know what the compression ratios are, but I'd imagine they would be pretty high.)

Cheers,
-brian

On 4/25/07, Brian Hechinger <wonko at 4amlunch.net> wrote:
> Hmmm, that's a good point. If ZFS is what caused the panic, you aren't
> guaranteed to get a crashdump to be able to diagnose what went wrong.
>
> Maybe raw dump devices aren't such a bad idea after all. ;)
>
> On that note, does anyone know what the rule of thumb on dump size is?
> RAM size? RAM+swap?
Brian Gupta wrote:
>> Yes, dump on ZVOL isn't currently supported, so a dump slice is still
>> needed.
>
> Maybe a dumb question, but why would anyone ever want to dump to an
> actual filesystem? (Or is my head thinking too Solaris)

IMHO, only a few people in the world care about dumps at all (and you know who you are :-). If you care, set up dump to an NFS server somewhere, no need to have it local.
 -- richard
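One reading of "set up dump to an NFS server" is to keep the raw dump device local but have savecore deposit the saved crash files in an NFS-mounted directory. A hedged sketch, with an invented path:

# dumpadm -s /net/crashhost/export/crash/`hostname`   # savecore will write unix.N/vmcore.N here at next boot
# dumpadm                                             # confirm the savecore directory changed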
Hello Brian,

Thursday, April 26, 2007, 3:55:16 AM, you wrote:

BG> If I recall, the dump partition needed to be at least as large as RAM.

BG> In Solaris 8(?) this changed, in that crashdump streams were
BG> compressed as they were written out to disk. Although I've never read
BG> this anywhere, I assumed the reasons this was done are as follows:

BG> 1) Large enterprise systems could support ridiculous (at the time)
BG> amounts of physical RAM. Providing a physical disk/LUN partition that
BG> could hold such a large crashdump seemed wasteful and expensive.

BG> 2) Compressing the dump before writing to disk would be faster, thus
BG> improving the chances of getting a full dump. (CPU performance has
BG> progressed at a much higher rate of change than disk throughputs
BG> have.)

BG> (I don't know what the compression ratios are, but I'd imagine they
BG> would be pretty high.)

By default only kernel pages are saved to the dump device, so even without compression it can be smaller than the RAM size in a server. I often see compression ratios of 1.x or 2.x, nothing more (it's lzjb after all).

Now with ZFS the story is a little bit different, as its caches are treated as kernel pages, so you are basically dumping all memory in the case of file servers... there's an open bug for it.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
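The "kernel pages only" default Robert mentions is a dumpadm setting; a small sketch:

# dumpadm -c kernel      # dump kernel pages only (the default)
# dumpadm -c all         # dump all of memory instead, at the cost of a much larger dump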
Robert Milkowski writes:

> By default only kernel pages are saved to the dump device, so even without
> compression it can be smaller than the RAM size in a server. I often see
> compression ratios of 1.x or 2.x, nothing more (it's lzjb after all).
>
> Now with ZFS the story is a little bit different, as its caches are
> treated as kernel pages, so you are basically dumping all memory in
> the case of file servers... there's an open bug for it.

Correction, it's now Fix Delivered in build snv_56:

  4894692 caching data in heap inflates crash dump

-r
Hello Roch,

Thursday, April 26, 2007, 12:33:00 PM, you wrote:

RP> Correction, it's now Fix Delivered in build snv_56:

RP>   4894692 caching data in heap inflates crash dump

Good to know.
I hope it will make it into U4.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
On Wed, Apr 25, 2007 at 09:55:16PM -0400, Brian Gupta wrote:
>
> In Solaris 8(?) this changed, in that crashdump streams were
> compressed as they were written out to disk. Although I've never read
> this anywhere, I assumed the reasons this was done are as follows:

What happens if the dump slice is too small? Does the dump just fail?

I mostly don't care about dumps, so...... ;)

-brian
On Wed, Apr 25, 2007 at 09:30:12PM -0700, Richard Elling wrote:
>
> IMHO, only a few people in the world care about dumps at all (and you
> know who you are :-). If you care, set up dump to an NFS server somewhere,
> no need to have it local.

a) what does this entail?

b) with zvols not supporting dump, what would happen if one were to set up a machine with no dump slice at all? Would it just skip the dump completely?

-brian
>
> RP> Correction, it's now Fix Delivered in build snv_56:
>
> RP>   4894692 caching data in heap inflates crash dump
>
> Good to know.
> I hope it will make it into U4.

Yep, it will.

You know, it's kinda silly we don't expose that info to the public via:
http://bugs.opensolaris.org/view_bug.do?bug_id=4894692

We can see the backporting info in our bug tracking. Might want to send a note to the opensolaris-discuss list...

eric
So first of all, we're not proposing dumping to a filesystem. We're proposing dumping to a zvol, which is a raw volume implemented within a pool (see the -V option to the zfs create command). As Malachi points out, the advantage of this is that it simplifies the ongoing administration. You don't have to pre-allocate a slice of the appropriate size, and then be unable to grow the space later.

You are right that at crash dump time, you want as little complexity as possible in the process of writing out the dump, because there's no knowing how broken the system is. So consider what happens with dump files and UFS. With UFS, you can set up a file as a dump device. This is not as crazy as it sounds, because at the time you set up the dump device (through dumpadm), UFS allocates the space and sets up an array of offset-length pointers to the space, so that at the time the crash dump takes place, some really dumb code in the kernel just has to run that list and hose the memory contents into those pre-allocated areas on the disk.

We are looking at doing something similar with zfs, where the space is allocated and pointers to it prepared in advance, so that at crash time, we only need very simple code to write out the dump. I'm not in charge of the zfs dump development, so I don't know the technical details, but I think that the development is proceeding along these lines.

Lori

Brian Gupta wrote:
> Please bear with me, as I am not very familiar with ZFS. (And
> unfortunately probably won't have time to be until ZFS supports root
> boot and clustering in a named release.)
>
> I do understand the reasons why you would want to dump to a virtual
> construct. I am just not very comfortable with the concept.
>
> My instinct is that you want the fewest layers of software involved in
> the event of a system crashdump.
>
> To me, dumping to logical volumes or filesystems seems like asking for
> trouble. Now on the other hand, if you were to dump to an underlying
> "zdev" it starts to make sense. (Assuming a zdev is basically a
> physical "chunk" of a LUN or disk.)
>
> Please educate me as to what I am missing.
>
> Thanks,
> Brian
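To illustrate the -V option Lori mentions: the commands below only create and resize the volume; actually pointing dumpadm at it is the part that isn't supported yet. The names and sizes are invented for the example.

# zfs create -V 4g rootpool/dumpvol            # a zvol: a raw volume carved out of the pool
# ls -l /dev/zvol/rdsk/rootpool/dumpvol        # the raw device node such a dump setup would eventually target
# zfs set volsize=8g rootpool/dumpvol          # unlike a fixed slice, a zvol can be grown later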
On Wed, 2007-04-25 at 21:30 -0700, Richard Elling wrote:
> Brian Gupta wrote:
> > Maybe a dumb question, but why would anyone ever want to dump to an
> > actual filesystem? (Or is my head thinking too Solaris)
>
> IMHO, only a few people in the world care about dumps at all (and you
> know who you are :-).

sorry, but that's an attitude which is toxic to quality.

EVERY installed Solaris system should be able to generate and save a valid crash dump, to increase the chance that a bug will be able to be root caused the first time some customer sees it.

- Bill
On Wed, Apr 25, 2007 at 09:30:12PM -0700, Richard Elling wrote:
> IMHO, only a few people in the world care about dumps at all (and you
> know who you are :-). If you care, set up dump to an NFS server somewhere,
> no need to have it local.

Well IMHO, every Solaris customer cares about crash dumps (although they may not know it). There are failures that occur once -- no dump means no solution.

And you're not going to be dumping directly over NFS if you care about your crash dump (see previous point).

Adam

--
Adam Leventhal, Solaris Kernel Development       http://blogs.sun.com/ahl
Adam Leventhal wrote:
> On Wed, Apr 25, 2007 at 09:30:12PM -0700, Richard Elling wrote:
>> IMHO, only a few people in the world care about dumps at all (and you
>> know who you are :-). If you care, set up dump to an NFS server somewhere,
>> no need to have it local.
>
> Well IMHO, every Solaris customer cares about crash dumps (although they
> may not know it). There are failures that occur once -- no dump means no
> solution.
>
> And you're not going to be dumping directly over NFS if you care about
> your crash dump (see previous point).

Hear, hear. Way back, about 10 years ago when I first joined SunService, one of the really embarrassing things we often had to get customers to do was manually edit an init.d script to enable savecore on reboot.

It is critical for all of us that we increase the chances of getting the kernel dumps the first time. This IMO is actually even more important now that we have things that self heal, like ZFS and SMF and FMA working together, because it means these cases are often even more serious.

IMO the same actually applies to userland core files too, and I'm slowly writing up a proposal to do something better than "drop a file named core in $CWD" - but please everyone, don't let's discuss that here on ZFS's discussion alias; it is not really on topic.

--
Darren J Moffat
Malachi de Ælfweald
2007-Apr-26 21:20 UTC
[zfs-discuss] Re: Status Update before Reinstall?
Just an interesting side note.... network-based logging isn't always a bad thing. I'll give you an example. My Netgear router will crash within 1/2 hour if I turn local logging on. However, it has no problems sending the logs via syslog to another machine.

Just a thought.

Mal

On 4/26/07, Adam Leventhal <ahl at eng.sun.com> wrote:
>
> Well IMHO, every Solaris customer cares about crash dumps (although they
> may not know it). There are failures that occur once -- no dump means no
> solution.
>
> And you're not going to be dumping directly over NFS if you care about
> your crash dump (see previous point).
>
> Adam
Of course. But in the case of syslog, you write it to local disk and send it to your central syslog server.

Speaking of syslog, where is the appropriate community to discuss syslog-ng?

Thanks,
Brian

On 4/26/07, Malachi de Ælfweald <malachid at gmail.com> wrote:
> Just an interesting side note.... network-based logging isn't always a bad
> thing. I'll give you an example. My Netgear router will crash within 1/2
> hour if I turn local logging on. However, it has no problems sending the
> logs via syslog to another machine.
>
> Just a thought.
>
> Mal
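For the curious, the "write locally and also forward" pattern Brian describes is just a pair of actions for the same selector in /etc/syslog.conf. A hedged sketch; the selector and the loghost name are examples, and the fields must be tab-separated:

*.err;kern.debug;daemon.notice          /var/adm/messages
*.err;kern.debug;daemon.notice          @loghost

# svcadm restart system-log             # make syslogd reread the configuration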