Hello, all.

I have constrained disk space (only 8GB) while running the OS inside a VM.
Now I want to add more. It is easy to grow the disk on the VM side, but how
can I grow the filesystem in the OS?

I cannot use autoexpand because it isn't implemented in my build:

$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc

If it were build 171 it would be great, right?

What I did:

 o added a new virtual HDD (it becomes /dev/rdsk/c7d1s0)
 o ran format, wrote a label

# zpool status
  pool: rpool
 state: ONLINE
 scrub: scrub completed after 0h10m with 0 errors on Fri May 28 16:47:05 2010
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7d0s0    ONLINE       0     0     0

errors: No known data errors

# zpool add rpool c7d1
cannot label 'c7d1': EFI labeled devices are not supported on root pools.

# prtvtoc /dev/rdsk/c7d0s0 | fmthard -s - /dev/rdsk/c7d1s0
fmthard: New volume table of contents now in place.

# zpool add rpool c7d1s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c7d1s0 overlaps with /dev/dsk/c7d1s2

# zpool add -f rpool c7d1s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs

 o OMG, I have tried all the magic commands I could find on the internet
   and in the manuals, and now I'm writing to the mailing list :-). Help!

--
Dmitry
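For context: on builds recent enough to have the autoexpand pool property,
the pool can be grown in place after enlarging the virtual disk, with no
second disk at all. A minimal sketch, assuming such a build (snv_111b above
does not have it) and the device name from the post:

# zpool set autoexpand=on rpool
# zpool online -e rpool c7d0s0
  (the -e form expands this one device immediately; device name assumed
   from the zpool status output above)

Either form only helps once the underlying virtual disk has actually been
grown on the VM side.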
Cindy Swearingen
2010-May-28 20:44 UTC
[zfs-discuss] expand zfs for OpenSolaris running inside vm
Hi--

I can't speak to running a ZFS root pool in a VM, but the problem is that
you can't add another disk to a root pool. All the boot info needs to be
contiguous; this is a boot limitation.

I've not attempted either of these operations in a VM, but you might
consider:

1. Replacing the root pool disk with a larger disk

2. Attaching a larger disk to the root pool and then detaching
   the smaller disk

I like #2 best. See this section in the ZFS troubleshooting wiki:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk

Thanks,

Cindy

On 05/28/10 12:54, me wrote:
> Hello, all.
>
> I have constrained disk space (only 8GB) while running the OS inside a
> VM. Now I want to add more. It is easy to grow the disk on the VM side,
> but how can I grow the filesystem in the OS?
> [...]
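A minimal sketch of option #2, assuming the new, larger disk shows up as
c7d1 with an SMI label and the whole disk in slice 0 (device names carried
over from the original post; adjust to your layout):

# zpool attach rpool c7d0s0 c7d1s0
  (wait for the resilver to complete; watch zpool status)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0
# zpool detach rpool c7d0s0

The installgrub step matters: without boot blocks on the new disk, the
system will not boot once the old disk is detached.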
Thanks! It is exactly what I was looking for.

On Sat, May 29, 2010 at 12:44 AM, Cindy Swearingen
<cindy.swearingen at oracle.com> wrote:

> 2. Attaching a larger disk to the root pool and then detaching
>    the smaller disk
>
> I like #2 best. See this section in the ZFS troubleshooting wiki:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
>
> Replacing/Relabeling the Root Pool Disk

The size of the pool has changed, and I updated the swap size too. Then I
detached the old disk and ran:

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

Now it fails to start up: GRUB loads, the OS loading screen shows, and then
the machine restarts :(. I have booted the rescue disc console but don't
know what to do next.

--
Dmitry
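A minimal sketch of what can be tried from the rescue console, assuming
the pool and device names above (this mirrors the GRUB reinstall that
resolves it in the next message):

# zpool import -f rpool
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zpool export rpool
# reboot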
me
2010-May-30 17:28 UTC
[zfs-discuss] [RESOLVED] Re: expand zfs for OpenSolaris running inside vm
Reinstalling GRUB helped.

What is the purpose of the dump slice?

On Sun, May 30, 2010 at 9:05 PM, me <deascx at gmail.com> wrote:

> Thanks! It is exactly what I was looking for.
> [...]
> Now it fails to start up: GRUB loads, the OS loading screen shows, and
> then the machine restarts :(. I have booted the rescue disc console but
> don't know what to do next.

--
Dmitry
Fred Liu
2010-May-31 23:20 UTC
[zfs-discuss] Can root pool slice co-exist with non-root pool slice in one HDD?
Hi,

The subject says it all.

Thanks.

Fred
Richard Elling
2010-May-31 23:42 UTC
[zfs-discuss] Can root pool slice co-exist with non-root pool slice in one HDD?
On May 31, 2010, at 4:20 PM, Fred Liu wrote:

> Hi,
>
> The subject says it all.

Yes.

The reply says it all. :-)

Making it happen is a feature of the installer(s).
 -- richard

--
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/
Thanks.

But that seems not to be the whole truth. I just recalled a thread on this
list which said an SMI label and an EFI label cannot coexist on one disk.
Is that correct?

Let me describe my case. I have a 160GB HDD -- say c0t0d0. I used the
OpenSolaris installer to cut a 100GB slice -- c0t0d0s0 -- for rpool, and I
want to use the remaining space for a cache device -- say c0t0d0s1. But
when I use the format command, I cannot see the remaining space:

partition> p
Current partition table (original):
Total disk cylinders available: 13048 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 - 13047       99.95GB    (13047/0/0) 209600055
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wu       0 - 13047       99.95GB    (13048/0/0) 209616120
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                0         (0/0/0)             0

Does anyone have a similar case?

Thanks.

Fred

-----Original Message-----
From: Richard Elling [mailto:richard.elling at gmail.com]
Sent: Tuesday, June 01, 2010 7:42 AM
To: Fred Liu
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] Can root pool slice co-exist with non-root pool slice in one HDD?

On May 31, 2010, at 4:20 PM, Fred Liu wrote:

> Hi,
>
> The subject says it all.

Yes.

The reply says it all. :-)

Making it happen, is a feature of the installer(s).
 -- richard
On Jun 1, 2010, at 4:35 AM, Fred Liu wrote:

> Thanks.
>
> But that seems not to be the whole truth. I just recalled a thread on
> this list which said an SMI label and an EFI label cannot coexist on
> one disk. Is that correct?
>
> Let me describe my case. I have a 160GB HDD -- say c0t0d0. I used the
> OpenSolaris installer to cut a 100GB slice -- c0t0d0s0 -- for rpool,
> and I want to use the remaining space for a cache device -- say
> c0t0d0s1. But when I use the format command, I cannot see the
> remaining space.
> [...]
>
> Does anyone have a similar case?

That is an fdisk partition, not a slice. As I noted, not all of the
installers have the flexibility you desire.
 -- richard

--
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/
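A minimal sketch of how to see the layer Richard is referring to, assuming
x86 and the disk name from Fred's mail. On x86, the fdisk partition table
sits above the VTOC slices that format's partition menu prints, so a 100GB
Solaris2 partition caps everything the slices can see:

# format -e c0t0d0
format> fdisk
  (shows the fdisk table; in this case the Solaris2 partition covers
   only ~100GB of the 160GB disk)
format> partition
partition> print
  (the slices printed here all live inside that fdisk partition)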
Cindy Swearingen
2010-Jun-01 16:07 UTC
[zfs-discuss] [RESOLVED] Re: expand zfs for OpenSolaris running inside vm
Hi--

The purpose of the ZFS dump volume is to provide space for a system crash
dump. You can choose not to have one, I suppose, but then you wouldn't be
able to collect valuable system info after a crash.

Thanks,

Cindy

On 05/30/10 11:28, me wrote:
> Reinstalling GRUB helped.
>
> What is the purpose of the dump slice?
> [...]
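A minimal sketch of inspecting and resizing the dump device on a ZFS root
system. The rpool/dump volume name is the installer's usual default and
the 2G size is only an example:

# dumpadm
  (shows the current dump device and savecore directory)
# zfs get volsize rpool/dump
# zfs set volsize=2G rpool/dump
  (example size; pick one appropriate for your RAM)
# dumpadm -d /dev/zvol/dsk/rpool/dump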
On 6/1/10 4:35 AM -0700 Fred Liu wrote:

> I just recalled a thread on this list which said an SMI label and an EFI
> label cannot coexist on one disk. Is that correct?

Correct. But that was not your original question.

> Let me describe my case. I have a 160GB HDD -- say c0t0d0. I used the
> OpenSolaris installer to cut a 100GB slice -- c0t0d0s0 -- for rpool,
> and I want to use the remaining space for a cache device -- say
> c0t0d0s1. But when I use the format command, I cannot see the
> remaining space.

You probably created the SMI label within a partition that doesn't include
the entire disk. I guess, as Richard says, this is a limitation of the
installer.

-frank
-----Original Message-----
From: Frank Cusack [mailto:frank+lists/zfs at linetwo.net]
Sent: Wednesday, June 02, 2010 2:38 AM
To: Fred Liu
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] SMI label and EFI label in one disk?

On 6/1/10 4:35 AM -0700 Fred Liu wrote:

> I just recalled a thread on this list which said an SMI label and an EFI
> label cannot coexist on one disk. Is that correct?

Correct. But that was not your original question.

I see. OK, let me expand the question: in what kind of situation would the
need for both labels on one disk actually arise? Or is it just a
hypothetical?

> Let me describe my case.
> [...]

You probably created the SMI label within a partition that doesn't include
the entire disk. I guess, as Richard says, this is a limitation of the
installer.

That is true. Any internals about this limitation? How can I realize my
goal?

Thanks.

Fred
On Tue, Jun 1, 2010 at 11:54 AM, Fred Liu <Fred_Liu at issi.com> wrote:

> That is true. Any internals about this limitation? How can I realize my
> goal?

You can't do it using the Caiman installer that comes with the osol dev
builds.

There are a few ways that you can do it now that the system is installed.
If you have a second drive, you can do a bit of a monte to make it work.
You basically create a mirror of your boot disk onto another drive, remove
the original drive and repartition, add it back as a mirror, then detach
the second mirror. You can do it all while the system is up, without
rebooting.

On the second drive (call it c0t1d0):
- Run format -e; you'll need to be able to specify the label type.
- Create an fdisk partition of type SOLARIS2 that uses the entire disk.
- Create a 100g slice in c0t1d0s0. Label the disk as SMI. Make sure that
  this is the same size as your existing c0t0d0s0. If it's too small, it
  won't work, and if it's too large it'll grow the pool size.

# zpool attach rpool c0t0d0s0 c0t1d0s0
... wait for resilver ...
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
# bootadm update-archive
# zpool detach rpool c0t0d0s0

(installgrub and bootadm here are just in case there's a problem and you
have to reboot.)

On your original boot drive:
- Run format and change the fdisk partition to use 100% of c0t0d0.
- Create a 100g slice in c0t0d0s0.
- Create a slice in c0t0d0s1 that uses the remaining space.

# zpool attach rpool c0t1d0s0 c0t0d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
# bootadm update-archive
# zpool detach rpool c0t1d0s0
# zpool add rpool cache c0t0d0s1

-B

--
Brandon High : bhigh at freaks.com
It is cool. Many thanks. It seems the installer of Solaris 10 U8 is more
flexible in this respect and can realize my goal directly.

Thanks.

Fred

-----Original Message-----
From: Brandon High [mailto:bhigh at freaks.com]
Sent: Wednesday, June 02, 2010 3:40 AM
To: Fred Liu
Cc: Frank Cusack; zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] SMI label and EFI label in one disk?

On Tue, Jun 1, 2010 at 11:54 AM, Fred Liu <Fred_Liu at issi.com> wrote:

> That is true. Any internals about this limitation? How can I realize my
> goal?

You can't do it using the Caiman installer that comes with the osol dev
builds.

There are a few ways that you can do it now that the system is installed.
If you have a second drive, you can do a bit of a monte to make it work.
[...]

--
Brandon High : bhigh at freaks.com
Fred Liu
2010-Jun-02 01:39 UTC
[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?
Thanks.

Fred
James C. McPherson
2010-Jun-02 01:57 UTC
[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?
On 2/06/10 11:39 AM, Fred Liu wrote:

> Thanks.

No. If you must disable MPxIO, then you do so after installation, using
the stmsboot command.

James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
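A minimal sketch of the stmsboot usage, assuming a build whose stmsboot
supports per-driver selection; see stmsboot(1M) on your system for the
exact options available:

# stmsboot -d
  (disable MPxIO on all supported controllers; requires a reboot)
# stmsboot -D mpt -d
  (or disable it only for mpt-attached, i.e. SAS, controllers)
# stmsboot -L
  (after the reboot, list the mapping between old and new device names)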
Fred Liu
2010-Jun-02 02:01 UTC
[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?
Yes. But the output of zpool commands still uses the MPxIO naming
convention, and the format command cannot find any disks.

Thanks.

Fred

-----Original Message-----
From: James C. McPherson [mailto:jmcp at opensolaris.org]
Sent: Wednesday, June 02, 2010 9:58 AM
To: Fred Liu
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

On 2/06/10 11:39 AM, Fred Liu wrote:

> Thanks.

No. If you must disable MPxIO, then you do so after installation, using
the stmsboot command.

James C. McPherson
James C. McPherson
2010-Jun-02 02:26 UTC
[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?
On 2/06/10 12:01 PM, Fred Liu wrote:

> Yes. But the output of zpool commands still uses the MPxIO naming
> convention, and the format command cannot find any disks.

_But_? What is the problem with ZFS using the device naming system that
the system provides it with?

Do you mean that you cannot see any plain old targets, or that no disk
devices of any sort show up in your host when you are installing?

What is your actual problem, and why do you think that turning off MPxIO
will solve it?

James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
Fred Liu
2010-Jun-02 05:09 UTC
[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?
In fact, there is no technical problem with the MPxIO names. The problem
is that storage admins have to remember them, and I think there is no way
to give short aliases to these long, tedious MPxIO names. Also, I have
only one HBA card, so I don't actually need multipathing; the simple
cXtXdX names would be much easier.

Furthermore, my ultimate goal is to map a disk's MPxIO path to its actual
physical slot position, so that when there is a broken HDD I can easily
tell which one to replace. BTW, "luxadm led_blink" may not work on
commodity hardware; it only works on Sun's proprietary disk arrays.

I think this is a common situation for storage admins. How do you replace
broken HDDs in your best practice?

Thanks.

Fred

-----Original Message-----
From: James C. McPherson [mailto:jmcp at opensolaris.org]
Sent: Wednesday, June 02, 2010 10:27 AM
To: Fred Liu
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

On 2/06/10 12:01 PM, Fred Liu wrote:

> Yes. But the output of zpool commands still uses the MPxIO naming
> convention, and the format command cannot find any disks.

_But_? What is the problem with ZFS using the device naming system that
the system provides it with?

Do you mean that you cannot see any plain old targets, or that no disk
devices of any sort show up in your host when you are installing?

What is your actual problem, and why do you think that turning off MPxIO
will solve it?

James C. McPherson
Fred Liu
2010-Jun-02 05:11 UTC
[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?
Fix some typos.....

#########################################################################

In fact, there is no technical problem with the MPxIO names. The problem
is that storage admins have to remember them, and I think there is no way
to give short aliases to these long, tedious MPxIO names. Also, I have
only one HBA card, so I don't actually need multipathing; the simple
cXtXdX names would be much easier.

Furthermore, my ultimate goal is to map a disk's MPxIO path to its actual
physical slot position, so that when there is a broken HDD I can easily
tell which one to replace. BTW, "luxadm led_blink" may not work on
commodity hardware; it only works on Sun's proprietary disk arrays.

I think this is a common situation for storage admins.

**How do you replace broken HDDs in your best practice?**

Thanks.

Fred

-----Original Message-----
From: James C. McPherson [mailto:jmcp at opensolaris.org]
Sent: Wednesday, June 02, 2010 10:27 AM
To: Fred Liu
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

On 2/06/10 12:01 PM, Fred Liu wrote:

> Yes. But the output of zpool commands still uses the MPxIO naming
> convention, and the format command cannot find any disks.

_But_? What is the problem with ZFS using the device naming system that
the system provides it with?
[...]

James C. McPherson
James C. McPherson
2010-Jun-02 12:03 UTC
[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?
On 2/06/10 03:11 PM, Fred Liu wrote:

> In fact, there is no technical problem with the MPxIO names. The problem
> is that storage admins have to remember them.

You are correct.

> I think there is no way to give short aliases to these long, tedious
> MPxIO names.

You are correct that we don't have aliases. However, I do not agree that
the naming is tedious. It gives you certainty about the actual device that
you are dealing with, without having to worry about whether you've cabled
it right.

> Also, I have only one HBA card, so I don't actually need multipathing.

For SAS and FC-attached devices, we are moving (however slowly) towards
having MPxIO on all the time. Please don't assume that turning on MPxIO
requires you to have multiple ports and/or HBAs - for the addressing
scheme at least, it does not. Failover.... that's another matter.

> The simple cXtXdX names would be much easier.

That naming system is rooted in parallel SCSI times. It is not appropriate
for SAS and FC environments.

> Furthermore, my ultimate goal is to map a disk's MPxIO path to its
> actual physical slot position, so that when there is a broken HDD I can
> easily tell which one to replace. BTW, "luxadm led_blink" may not work
> on commodity hardware; it only works on Sun's proprietary disk arrays.
>
> I think this is a common situation for storage admins.
>
> **How do you replace broken HDDs in your best practice?**

If you are running build 126 or later, then you can take advantage of the
behaviour that was added to cfgadm(1m):

$ cfgadm -lav c3 c4
Ap_Id                          Receptacle   Occupant     Condition  Information
When         Type         Busy     Phys_Id
c3                             connected    configured   unknown
unavailable  scsi-sas     n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi
c3::0,0                        connected    configured   unknown    Client Device: /dev/dsk/c5t5000CCA00510A7CCd0s0(sd37)
unavailable  disk-path    n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::0,0
c3::dsk/c3t2d0                 connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c3t2d0
c3::dsk/c3t3d0                 connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c3t3d0
c3::dsk/c3t4d0                 connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c3t4d0
c3::dsk/c3t6d0                 connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::dsk/c3t6d0
c4                             connected    configured   unknown
unavailable  scsi-sas     n        /devices/pci at ff,0/pci10de,377 at f/pci1000,3150 at 0:scsi
c4::5,0                        connected    configured   unknown    Client Device: /dev/dsk/c5t50000F001BB01248d0s0(sd38)
unavailable  disk-path    n        /devices/pci at ff,0/pci10de,377 at f/pci1000,3150 at 0:scsi::5,0
c4::6,0                        connected    configured   unknown    Client Device: /dev/dsk/c5t50014EE1007EE473d0s0(sd39)
unavailable  disk-path    n        /devices/pci at ff,0/pci10de,377 at f/pci1000,3150 at 0:scsi::6,0
c4::dsk/c4t3d0                 connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at ff,0/pci10de,377 at f/pci1000,3150 at 0:scsi::dsk/c4t3d0
c4::dsk/c4t7d0                 connected    configured   unknown    ST3320620AS ST3320620AS
unavailable  disk         n        /devices/pci at ff,0/pci10de,377 at f/pci1000,3150 at 0:scsi::dsk/c4t7d0

While the above is a bit unwieldy to read in an email, it does show you the
following things:

(0) I have SAS and SATA disks
(1) I have MPxIO turned on
(2) the MPxIO-capable devices are listed with both their "client" or
    scsi_vhci path, and their traditional cXtYdZ name

$ cfgadm -lav c3::0,0 c4::5,0 c4::6,0
Ap_Id                          Receptacle   Occupant     Condition  Information
When         Type         Busy     Phys_Id
c3::0,0                        connected    configured   unknown    Client Device: /dev/dsk/c5t5000CCA00510A7CCd0s0(sd37)
unavailable  disk-path    n        /devices/pci at 0,0/pci10de,376 at a/pci1000,3150 at 0:scsi::0,0
c4::5,0                        connected    configured   unknown    Client Device: /dev/dsk/c5t50000F001BB01248d0s0(sd38)
unavailable  disk-path    n        /devices/pci at ff,0/pci10de,377 at f/pci1000,3150 at 0:scsi::5,0
c4::6,0                        connected    configured   unknown    Client Device: /dev/dsk/c5t50014EE1007EE473d0s0(sd39)
unavailable  disk-path    n        /devices/pci at ff,0/pci10de,377 at f/pci1000,3150 at 0:scsi::6,0

No need to use luxadm.

James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
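A minimal sketch of putting that output to use when a disk faults,
assuming zpool status reports the WWN-based device name (the WWN here is
taken from James's listing, purely as an example):

# zpool status -x
  (find the faulted device, e.g. c5t5000CCA00510A7CCd0)
# cfgadm -lav | grep 5000CCA00510A7CC
  (map the WWN back to the controller and target shown above)

From the matching Phys_Id you can tell which controller and target the
disk hangs off, which narrows it down to a physical slot if your cabling
is known.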
Andrew Gabriel
2010-Jun-02 12:21 UTC
[zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?
(The body of this message was an HTML attachment and was scrubbed from the
archive.)