Garrett D'Amore
2007-Apr-17 22:42 UTC
[crossbow-discuss] webrev: conversion of dmfe to nemo
I've gone ahead and done a conversion of dmfe to GLDv3 (nemo). I actually have a dmfe device on SPARC (in this case it's on a SPARC laptop), so I figured this would be beneficial. I'd like to have folks review the work at http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev

A few quick notes:

* I nixed the redundant "mii" kstats... nemo already tracks them.

* This will make dmfe a DLPI style 1 provider as well. (A good thing, IMO, DLPI style 2 is a "bug".)

* A few kstats got "lost" since Nemo doesn't support them... the remfault kstats, the runt packets kstat, and maybe one or two others.

* DMFE doesn't support multiple rings, so I didn't bother with "mac_resource_add". However, it could in theory support polling, but since it appears that the polling framework for crossbow isn't integrated yet, I didn't add it. (It's unclear whether it's worth doing this for a 100Mbps device anyway.)

* DMFE could easily support multiple unicast addresses. If there is value in this, I can go back and add the necessary bits to support it. (I'm thinking this could be useful for VNICs.)

* I'd love to replace the dmfe-custom loopback ioctls with the standard sys/netlb.h ioctls. However, I'm not sure if any consumers are going to be impacted.

* As a result of other kstat-related changes, it's possible that the interpretation of certain values might not be identical, e.g. link_duplex, etc. (But as a bonus, dladm show-dev now properly reports link information!)

I'm thinking that I'll probably run this by ARC as a self-approved fasttrack. Ideas?

-- Garrett
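At its core, a conversion like this replaces the driver's DLPI plumbing with a mac_register(9F) call and a mac_callbacks_t vector. The following is only a rough sketch of the attach-time registration, not code from the webrev; the dmfe_m_* names and soft-state fields are illustrative, and the mac_callbacks_t layout is as I recall it from the current gate:

    #include <sys/mac.h>

    static mac_callbacks_t dmfe_m_callbacks = {
            MC_IOCTL | MC_GETCAPAB,     /* mask of optional entry points */
            dmfe_m_stat,                /* mc_getstat */
            dmfe_m_start,
            dmfe_m_stop,
            dmfe_m_promisc,
            dmfe_m_multicst,
            dmfe_m_unicst,
            dmfe_m_tx,
            NULL,                       /* mc_resources: no multiple rings */
            dmfe_m_ioctl,
            dmfe_m_getcapab
    };

    /* in attach(9E), after the chip is set up */
    mac_register_t *macp;
    int err;

    if ((macp = mac_alloc(MAC_VERSION)) == NULL)
            return (DDI_FAILURE);
    macp->m_type_ident = MAC_PLUGIN_IDENT_ETHER;
    macp->m_driver = dmfep;             /* driver soft state */
    macp->m_dip = devinfo;
    macp->m_src_addr = dmfep->curr_addr;
    macp->m_callbacks = &dmfe_m_callbacks;
    macp->m_min_sdu = 0;
    macp->m_max_sdu = ETHERMTU;
    err = mac_register(macp, &dmfep->mac_handle);
    mac_free(macp);
    if (err != 0)
            return (DDI_FAILURE);

Once this registration succeeds, the framework takes care of the DLPI state machine, fastpath, and most of the generic kstats that the old driver carried itself.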
Peter Memishian
2007-Apr-17 23:24 UTC
[crossbow-discuss] webrev: conversion of dmfe to nemo
> I'd like to have folks review the work at
> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>
> * This will make dmfe a DLPI style 1 provider as well. (A good
> thing, IMO, DLPI style 2 is a "bug".)

Yes. FWIW, all the Clearview /dev/net nodes are DLPI style-1 *only*. Since those will be what applications interact with in the future, we're on our way to getting rid of "style-2 disease".

> * I'd love to replace the dmfe-custom loopback ioctls with the standard
> sys/netlb.h ioctls. However, I'm not sure if any consumers are going to
> be impacted.

IIRC, switching to sys/netlb.h would allow SunVTS coverage.

-- 
meem
Nicolas Droux
2007-Apr-18 00:07 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Garrett D'Amore wrote:
> I've gone ahead and done a conversion of dmfe to GLDv3 (nemo). I
> actually have a dmfe device on SPARC (in this case it's on a SPARC
> laptop), so I figured this would be beneficial.
>
> I'd like to have folks review the work at
> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev

Great to see another driver ported to Nemo.

> * DMFE doesn't support multiple rings, so I didn't bother with
> "mac_resource_add". However, it could in theory support polling, but
> since it appears that the polling framework for crossbow isn't
> integrated yet, I didn't add it. (It's unclear whether it's worth doing
> this for a 100Mbps device anyway.)

If it's only for 100Mb/s, that sounds like a reasonable approach until the full polling implementation is provided by Crossbow.

> * DMFE could easily support multiple unicast addresses. If there is
> value in this, I can go back and add the necessary bits to support it.
> (I'm thinking this could be useful for VNICs.)

That would definitely be useful for VNICs. The VNIC driver in the Crossbow gate currently takes advantage of that capability if it is supported by the underlying NIC.

Thanks, Nicolas.

-- 
Nicolas Droux - Solaris Networking - Sun Microsystems, Inc.
droux at sun.com - http://blogs.sun.com/droux
Garrett D'Amore
2007-Apr-18 04:20 UTC
[crossbow-discuss] webrev: conversion of dmfe to nemo
Peter Memishian wrote:
>> I'd like to have folks review the work at
>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>>
>> * This will make dmfe a DLPI style 1 provider as well. (A good
>> thing, IMO, DLPI style 2 is a "bug".)
>
> Yes. FWIW, all the Clearview /dev/net nodes are DLPI style-1 *only*.
> Since those will be what applications interact with in the future,
> we're on our way to getting rid of "style-2 disease".
>
>> * I'd love to replace the dmfe-custom loopback ioctls with the standard
>> sys/netlb.h ioctls. However, I'm not sure if any consumers are going to
>> be impacted.
>
> IIRC, switching to sys/netlb.h would allow SunVTS coverage.

Yes, I believe that is true. What I'm not sure of is whether there is a custom SunVTS module in place for dmfe. (It wouldn't surprise me to learn that there is.) Is SunVTS open sourced? :-) When I last looked at the SunVTS source (back in 2001 or 2002), the loopback ioctls were specific to each driver, and they were special-cased in SunVTS.

Btw, I am looking for _formal_ review feedback... i.e. for folks that I can cite in an RTI request. :-)

-- Garrett
Garrett D'Amore
2007-Apr-18 04:25 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Nicolas Droux wrote:
> Garrett D'Amore wrote:
>> I've gone ahead and done a conversion of dmfe to GLDv3 (nemo). I
>> actually have a dmfe device on SPARC (in this case it's on a SPARC
>> laptop), so I figured this would be beneficial.
>>
>> I'd like to have folks review the work at
>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>
> Great to see another driver ported to Nemo.

I want to do the others as well... eri and hme are next on my hit list. They will get a _lot_ smaller as a result. There's been a request for qfe (and also for the qfe sources to be opened) with support for qfe on x86. However, those sources are not open yet, for reasons beyond my understanding at the moment. Likewise for gem.

Cassini is probably the most pressing conversion, but also the most painful/risky to do, because it supports so many features. I might be willing to convert it if I had hardware readily available.

FYI, my own afe driver, which is GLDv2 (Solaris 8 DDI only), is soon to be integrated. I'm not converting it to nemo until _after_ integration, as I do not want to cause a reset on the testing that has already taken place prior to its integration.

>> * DMFE doesn't support multiple rings, so I didn't bother with
>> "mac_resource_add". However, it could in theory support polling,
>> but since it appears that the polling framework for crossbow isn't
>> integrated yet, I didn't add it. (It's unclear whether it's worth
>> doing this for a 100Mbps device anyway.)
>
> If it's only for 100Mb/s, that sounds like a reasonable approach until
> the full polling implementation is provided by Crossbow.
>
>> * DMFE could easily support multiple unicast addresses. If there
>> is value in this, I can go back and add the necessary bits to support
>> it. (I'm thinking this could be useful for VNICs.)
>
> That would definitely be useful for VNICs. The VNIC driver in the
> Crossbow gate currently takes advantage of that capability if it is
> supported by the underlying NIC.

Are there any sample tests that I can use to validate the MULTIADDRESS capability? I'm thinking that I'd add that as an RFE after I commit the initial nemo conversion.

Btw, have you reviewed the actual code? I'm looking for reviewers that I can list in an RTI....

-- Garrett
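For context on what that RFE would involve: the capability is advertised from the driver's MC_GETCAPAB entry point, roughly as below. The multiaddress_capab_t field names are written from memory of the Crossbow gate, and the dmfe_*_ucast helpers and DMFE_MAX_UCAST limit are hypothetical, so treat all of the names as assumptions:

    /* inside dmfe_m_getcapab(void *arg, mac_capab_t cap, void *cap_data) */
    case MAC_CAPAB_MULTIADDRESS: {
            multiaddress_capab_t *mcap = cap_data;

            mcap->maddr_naddr = DMFE_MAX_UCAST;     /* hw filter slots */
            mcap->maddr_naddrfree = dmfep->ucast_free;
            mcap->maddr_flag = 0;
            mcap->maddr_handle = dmfep;
            mcap->maddr_add = dmfe_add_ucast;       /* hypothetical helpers */
            mcap->maddr_remove = dmfe_rem_ucast;
            mcap->maddr_modify = dmfe_mod_ucast;
            mcap->maddr_get = dmfe_get_ucast;
            return (B_TRUE);
    }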
Peter Memishian
2007-Apr-18 04:41 UTC
[crossbow-discuss] webrev: conversion of dmfe to nemo
> Btw, I am looking for _formal_ review feedback... i.e. for folks that I
> can cite in an RTI request. :-)

I wish I had the time right now, sorry :-(

-- 
meem
Oliver Yang
2007-Apr-18 06:48 UTC
[networking-discuss] Re: [crossbow-discuss] webrev: conversion of dmfe to nemo
Garrett D'Amore wrote:
> Peter Memishian wrote:
>>> I'd like to have folks review the work at
>>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>>>
>>> * This will make dmfe a DLPI style 1 provider as well. (A good
>>> thing, IMO, DLPI style 2 is a "bug".)
>>
>> Yes. FWIW, all the Clearview /dev/net nodes are DLPI style-1 *only*.
>> Since those will be what applications interact with in the future,
>> we're on our way to getting rid of "style-2 disease".
>>
>>> * I'd love to replace the dmfe-custom loopback ioctls with the standard
>>> sys/netlb.h ioctls. However, I'm not sure if any consumers are going to
>>> be impacted.
>>
>> IIRC, switching to sys/netlb.h would allow SunVTS coverage.
>
> Yes, I believe that is true. What I'm not sure of is whether there
> is a custom SunVTS module in place for dmfe. (It wouldn't surprise me to
> learn that there is.) Is SunVTS open sourced? :-)

The loopback test of SunVTS is quite special. It seems to require code changes in SunVTS to support a new driver. I'm not sure why. Does anybody know about it?

-- 
Cheers,
----------------------------------------------------------------------
Oliver Yang | Oliver.Yang at Sun.COM | x82229 | Work from home
Garrett D'Amore
2007-Apr-18 07:14 UTC
[networking-discuss] Re: [crossbow-discuss] webrev: conversion of dmfe to nemo
Oliver Yang wrote:
> Garrett D'Amore wrote:
>> Peter Memishian wrote:
>>>> I'd like to have folks review the work at
>>>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>>>>
>>>> * This will make dmfe a DLPI style 1 provider as well. (A good
>>>> thing, IMO, DLPI style 2 is a "bug".)
>>>
>>> Yes. FWIW, all the Clearview /dev/net nodes are DLPI style-1 *only*.
>>> Since those will be what applications interact with in the future,
>>> we're on our way to getting rid of "style-2 disease".
>>>
>>>> * I'd love to replace the dmfe-custom loopback ioctls with the
>>>> standard sys/netlb.h ioctls. However, I'm not sure if any
>>>> consumers are going to be impacted.
>>>
>>> IIRC, switching to sys/netlb.h would allow SunVTS coverage.
>>
>> Yes, I believe that is true. What I'm not sure of is whether there
>> is a custom SunVTS module in place for dmfe. (It wouldn't surprise
>> me to learn that there is.) Is SunVTS open sourced? :-)
> The loopback test of SunVTS is quite special. It seems to require
> code changes in SunVTS to support a new driver. I'm not sure why.
> Does anybody know about it?

The last time I looked, it had a fixed list of drivers, along with some ioctls. There is supposed to be a "common" ioctl (which is really derived from the original GEM ioctls) for it, but I still think SunVTS isn't smart enough to realize whether a driver supports the loopback ioctls or not.

Another wrinkle is that of course only a few drivers support the netlb.h ioctls, since they were never formally published anywhere. This is another tunable that I'd like Brussels to just take care of. Making drivers handle ioctls for every kind of tunable is really, really ugly. :-)

-- Garrett
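For reference, the sys/netlb.h interface is small; a driver mostly has to answer four ioctls against a table of the loopback modes it supports. A rough sketch follows, with the mode table and the copyout/chip-programming details elided and illustrative:

    #include <sys/netlb.h>

    /* loopback modes this device can offer; 'value' is driver-private */
    static lb_property_t dmfe_loopmodes[] = {
            { normal,       "normal",       0 },
            { internal,     "internal",     1 },
    };

    /* inside the driver's ioctl entry point */
    switch (cmd) {
    case LB_GET_INFO_SIZE:
            /* reply with sizeof (dmfe_loopmodes) as an lb_info_sz_t */
            break;
    case LB_GET_INFO:
            /* copy the dmfe_loopmodes array out to the caller */
            break;
    case LB_GET_MODE:
            /* reply with the 'value' of the currently selected mode */
            break;
    case LB_SET_MODE:
            /* program the chip for the requested mode's 'value' */
            break;
    }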
Oliver Yang
2007-Apr-18 07:58 UTC
[networking-discuss] Re: [crossbow-discuss] webrev: conversion of dmfe to nemo
Garrett D'Amore wrote:
> Oliver Yang wrote:
>> Garrett D'Amore wrote:
>>> Peter Memishian wrote:
>>>>> I'd like to have folks review the work at
>>>>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>>>>>
>>>>> * This will make dmfe a DLPI style 1 provider as well. (A good
>>>>> thing, IMO, DLPI style 2 is a "bug".)
>>>>
>>>> Yes. FWIW, all the Clearview /dev/net nodes are DLPI style-1 *only*.
>>>> Since those will be what applications interact with in the future,
>>>> we're on our way to getting rid of "style-2 disease".
>>>>
>>>>> * I'd love to replace the dmfe-custom loopback ioctls with the
>>>>> standard sys/netlb.h ioctls. However, I'm not sure if any
>>>>> consumers are going to be impacted.
>>>>
>>>> IIRC, switching to sys/netlb.h would allow SunVTS coverage.
>>>
>>> Yes, I believe that is true. What I'm not sure of is whether there
>>> is a custom SunVTS module in place for dmfe. (It wouldn't surprise
>>> me to learn that there is.) Is SunVTS open sourced? :-)
>> The loopback test of SunVTS is quite special. It seems to require
>> code changes in SunVTS to support a new driver. I'm not sure why.
>> Does anybody know about it?
>
> The last time I looked, it had a fixed list of drivers, along with
> some ioctls. There is supposed to be a "common" ioctl (which is
> really derived from the original GEM ioctls) for it, but I still think
> SunVTS isn't smart enough to realize whether a driver supports the
> loopback ioctls or not.
>
> Another wrinkle is that of course only a few drivers support the
> netlb.h ioctls, since they were never formally published anywhere.
> This is another tunable that I'd like Brussels to just take care of.

I think it should be a public property for NIC drivers, although few drivers support it. Maybe the Brussels project can provide this feature via a dladm property get.

-- 
Cheers,
----------------------------------------------------------------------
Oliver Yang | Oliver.Yang at Sun.COM | x82229 | Work from home
On 4/17/07, Garrett D'Amore <garrett at damore.org> wrote:
>
> I'd like to have folks review the work at
> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>

Garrett,

I had a look at your code and it looks fine; you may want to double-check the locking though, since Nemo's and GLD's locking are different in a few places.

Paul

-- 
Paul Durrant
http://www.linkedin.com/in/pdurrant
Garrett D'Amore
2007-Apr-18 15:12 UTC
[crossbow-discuss] webrev: conversion of dmfe to nemo
Paul Durrant wrote:
> On 4/17/07, Garrett D'Amore <garrett at damore.org> wrote:
>>
>> I'd like to have folks review the work at
>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>>
>
> Garrett,
>
> I had a look at your code and it looks fine; you may want to double-check
> the locking though, since Nemo's and GLD's locking are different
> in a few places.
>
> Paul

Hmm.... I figured the same rules apply... don't hold any locks when calling any of Nemo's functions. (That's the normal GLD rule.) Given that rule, all other locks are "leaf" locks, at least from the perspective of code outside of the driver.

The only places dmfe calls into the mac layer are mac_rx, mac_link_update, and mac_tx_update. At each of these points, dmfe is not holding any locks whatsoever. I'm pretty sure therefore that it's correct.

-- Garrett
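A minimal sketch of the rule being described, with illustrative names (milock, link_state, and dmfe_media_probe are not from the actual driver): finish updating driver state, drop the lock, and only then call up into the MAC layer.

    static void
    dmfe_check_link(dmfe_t *dmfep)
    {
            link_state_t new_state;

            mutex_enter(&dmfep->milock);
            new_state = dmfe_media_probe(dmfep);    /* hypothetical helper */
            dmfep->link_state = new_state;
            mutex_exit(&dmfep->milock);

            /* no driver locks may be held across the Nemo upcall */
            mac_link_update(dmfep->mac_handle, new_state);
    }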
Garrett D'Amore
2007-Apr-18 15:30 UTC
[networking-discuss] Re: [crossbow-discuss] webrev: conversion of dmfe to nemo
Oliver Yang wrote:
> Garrett D'Amore wrote:
>>
>> Another wrinkle is that of course only a few drivers support the
>> netlb.h ioctls, since they were never formally published anywhere.
>> This is another tunable that I'd like Brussels to just take care of.
> I think it should be a public property for NIC drivers, although few
> drivers support it. Maybe the Brussels project can provide this feature
> via a dladm property get.

Yes, please. Anyone from the Brussels project paying attention?

-- Garrett
sowmini.varadhan at sun.com
2007-Apr-18 15:36 UTC
[networking-discuss] Re: [crossbow-discuss] webrev: conversion of dmfe to nemo
On (04/18/07 08:30), Garrett D'Amore wrote:
> Oliver Yang wrote:
>> Garrett D'Amore wrote:
>>>
>>> Another wrinkle is that of course only a few drivers support the
>>> netlb.h ioctls, since they were never formally published anywhere.
>>> This is another tunable that I'd like Brussels to just take care of.
>> I think it should be a public property for NIC drivers, although few
>> drivers support it. Maybe the Brussels project can provide this
>> feature via a dladm property get.
>
> Yes, please. Anyone from the Brussels project paying attention?

Yes, actually Oliver Yang is himself involved in contributing to Brussels discussions :-)

--Sowmini
Nicolas Droux
2007-Apr-18 17:29 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Garrett D'Amore wrote:
> Nicolas Droux wrote:
>> Garrett D'Amore wrote:
>>> I've gone ahead and done a conversion of dmfe to GLDv3 (nemo). I
>>> actually have a dmfe device on SPARC (in this case it's on a SPARC
>>> laptop), so I figured this would be beneficial.
>>>
>>> I'd like to have folks review the work at
>>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>>
>> Great to see another driver ported to Nemo.
>
> I want to do the others as well... eri and hme are next on my hit list.
> They will get a _lot_ smaller as a result. There's been a request for qfe
> (and also for the qfe sources to be opened) with support for qfe on x86.
> However, those sources are not open yet, for reasons beyond my
> understanding at the moment.

I would suggest focusing on the ones that are still heavily used instead of the ones for legacy devices, and instead spending time on making the GLDv3 interface officially public so that more "interesting" drivers can be ported to GLDv3 :-)

> Likewise for gem. Cassini is probably the most pressing conversion, but
> also the most painful/risky to do, because it supports so many
> features. I might be willing to convert it if I had hardware readily
> available.

Yeah, that would be more challenging. Also, ce does not live in ON, and I don't think it is open-sourced.

> FYI, my own afe driver, which is GLDv2 (Solaris 8 DDI only), is soon to
> be integrated. I'm not converting it to nemo until _after_ integration,
> as I do not want to cause a reset on the testing that has already taken
> place prior to its integration.

That's unfortunate. If the end goal is to have a GLDv3 version, the appropriate retesting will have to be done anyway. These extra cycles could be spent now and the conversion gotten out of the way. (This points out another problem, which is the level of pain needed to properly QA a driver before integration in ON; this should really be automated to lower the barrier of entry for new drivers in ON.)

> Are there any sample tests that I can use to validate the MULTIADDRESS
> capability? I'm thinking that I'd add that as an RFE after I commit the
> initial nemo conversion.

Unfortunately, not today. One way to do this would be to use your driver on a system running Crossbow and use VNICs to exercise that code.

> Btw, have you reviewed the actual code? I'm looking for reviewers that
> I can list in an RTI....

Not yet, but I will.

Nicolas.

-- 
Nicolas Droux - Solaris Networking - Sun Microsystems, Inc.
droux at sun.com - http://blogs.sun.com/droux
Garrett D'Amore
2007-Apr-18 17:38 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Nicolas Droux wrote:
> Garrett D'Amore wrote:
>> Nicolas Droux wrote:
>>> Garrett D'Amore wrote:
>>>> I've gone ahead and done a conversion of dmfe to GLDv3 (nemo). I
>>>> actually have a dmfe device on SPARC (in this case it's on a SPARC
>>>> laptop), so I figured this would be beneficial.
>>>>
>>>> I'd like to have folks review the work at
>>>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>>>
>>> Great to see another driver ported to Nemo.
>>
>> I want to do the others as well... eri and hme are next on my hit
>> list. They will get a _lot_ smaller as a result. There's been a
>> request for qfe (and also for the qfe sources to be opened) with
>> support for qfe on x86. However, those sources are not open yet, for
>> reasons beyond my understanding at the moment.
>
> I would suggest focusing on the ones that are still heavily used
> instead of the ones for legacy devices, and instead spending time on
> making the GLDv3 interface officially public so that more
> "interesting" drivers can be ported to GLDv3 :-)

Heavily used depends on what systems you have at hand. hme and eri are _widely_ deployed, and _widely_ used. E.g. all UltraSPARC II/IIi/IIe based systems shipped with either eri or hme. This includes "big" systems like the E10k, E6500, etc.

The other "popular" NIC is qfe, but that code doesn't live in ON where I can readily access it. (The argument for doing qfe is probably quite compelling, as a lot of qfe NICs were sold, even well after gigabit started to become popular.)

In the meantime, converting the legacy NICs is actually pretty darned easy. It takes only ~1 day of coding effort to convert hme or eri. (I did an hme->gldv2 conversion 4 years ago, which nobody picked up. But now it starts to look interesting again with nemo. I've already had a person ask me about doing an hme conversion, and several asking for qfe.)

>> Likewise for gem. Cassini is probably the most pressing conversion,
>> but also the most painful/risky to do, because it supports so many
>> features. I might be willing to convert it if I had hardware
>> readily available.
>
> Yeah, that would be more challenging. Also, ce does not live in ON, and
> I don't think it is open-sourced.

Right. Those are the reasons why I'm not chomping at the bit to do ce right now. If I were reasonably confident that my work to convert ce would result in ce getting into ON, then I'd probably just go for it. But I don't have that feeling right now.

>> FYI, my own afe driver, which is GLDv2 (Solaris 8 DDI only), is soon
>> to be integrated. I'm not converting it to nemo until _after_
>> integration, as I do not want to cause a reset on the testing that
>> has already taken place prior to its integration.
>
> That's unfortunate. If the end goal is to have a GLDv3 version, the
> appropriate retesting will have to be done anyway. These extra cycles
> could be spent now and the conversion gotten out of the way. (This
> points out another problem, which is the level of pain needed to
> properly QA a driver before integration in ON; this should really be
> automated to lower the barrier of entry for new drivers in ON.)

The testing cycles have _already_ been spent, is my point. So any change at this point causes a test reset. I think the barrier to entry for a "new" driver is higher than for modifications to an existing driver. Once it goes in, I'll quickly adapt it, because there are a bunch of GLDv3 features I want to make use of, including things like mac_link_update(), multiaddress support, etc.

>> Are there any sample tests that I can use to validate the
>> MULTIADDRESS capability? I'm thinking that I'd add that as an RFE
>> after I commit the initial nemo conversion.
>
> Unfortunately, not today. One way to do this would be to use your
> driver on a system running Crossbow and use VNICs to exercise that code.

Okay, I will probably need to start playing around with VNICs soon anyway.

>> Btw, have you reviewed the actual code? I'm looking for reviewers
>> that I can list in an RTI....
>
> Not yet, but I will.

Thanks.

-- Garrett
Garrett D'Amore
2007-Apr-18 17:39 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Andrew Gallatin wrote:
> Nicolas Droux writes:
>> of the ones for legacy devices, and instead spend time on making the
>> GLDv3 interface officially public so that more "interesting" drivers can
>> be ported to GLDv3 :-)
>
> As the author of a 10GbE driver which is constrained to GLDv2 due to lack
> of public GLDv3 interfaces, I whole-heartedly second this!

Unfortunately, there isn't anything _I_ can personally do to fix this. Believe me, I would far, far rather have had nemo interfaces available for my own public GLDv2 drivers.

-- Garrett
Darren.Reed at Sun.COM
2007-Apr-18 18:30 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Nicolas Droux wrote:
> Garrett D'Amore wrote:
>> Nicolas Droux wrote:
>>> Garrett D'Amore wrote:
>>>> I've gone ahead and done a conversion of dmfe to GLDv3 (nemo). I
>>>> actually have a dmfe device on SPARC (in this case it's on a SPARC
>>>> laptop), so I figured this would be beneficial.
>>>>
>>>> I'd like to have folks review the work at
>>>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>>>
>>> Great to see another driver ported to Nemo.
>>
>> I want to do the others as well... eri and hme are next on my hit
>> list. They will get a _lot_ smaller as a result. There's been a
>> request for qfe (and also for the qfe sources to be opened) with
>> support for qfe on x86. However, those sources are not open yet, for
>> reasons beyond my understanding at the moment.
>
> I would suggest focusing on the ones that are still heavily used
> instead of the ones for legacy devices, and instead spending time on
> making the GLDv3 interface officially public so that more
> "interesting" drivers can be ported to GLDv3 :-)

hme and qfe are widely used cards, especially qfe, as quad-port cards (10/100/1G) are not common and they provide a good way to bring in a lot of networking. eri is also quite common - all of the Ultra 5/10 boxes have them, not to mention the Netra X1, etc.

Darren
Peter Memishian
2007-Apr-18 22:55 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
> I'd personally like to see ce myself since it is still present in lots of
> systems Sun sells, as well as their current quad ethernet card, though it
> sounds like it'd be a more complex task than the others.

Yes, there are a lot of obscure tunables with `ce' that make it a challenging port to do without regressions. That said, as part of Clearview's Nemo Unification work, ce can be used as-is and will appear like any GLDv3 link -- e.g., you can create GLDv3-based aggregations, VLANs, and so forth. So far, the performance numbers look good too.

-- 
meem
Garrett D'Amore
2007-Apr-18 23:07 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Peter Memishian wrote:
>> I'd personally like to see ce myself since it is still present in lots of
>> systems Sun sells, as well as their current quad ethernet card, though it
>> sounds like it'd be a more complex task than the others.
>
> Yes, there are a lot of obscure tunables with `ce' that make it a
> challenging port to do without regressions. That said, as part of
> Clearview's Nemo Unification work, ce can be used as-is and will appear
> like any GLDv3 link -- e.g., you can create GLDv3-based aggregations,
> VLANs, and so forth. So far, the performance numbers look good too.
>
> -- 
> meem

Those tunables should still be available... e.g. via NDD or driver.conf, or whatever. Nemo (at least until Brussels) does nothing about that.

The complexity for ce comes from the fact that it does some other "scary" things, e.g. optimizations for zero copy, etc. I suspect a lot of the "custom" stuff in ce, though, is much more cleanly handled by nemo.

If someone wants me to work on a ce->nemo port, and wants to ensure that we can integrate the results into ON, then just say the word, and I'll start working on it.

Ditto for qfe or gem.

Unless the work is going to integrate into ON, I don't think using GLDv3 is appropriate at this time.

-- Garrett
Rajagopal Kunhappan
2007-Apr-18 23:08 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Garrett D'Amore wrote:
> Peter Memishian wrote:
>>> I'd personally like to see ce myself since it is still present in
>>> lots of systems Sun sells, as well as their current quad ethernet
>>> card, though it sounds like it'd be a more complex task than the
>>> others.
>>
>> Yes, there are a lot of obscure tunables with `ce' that make it a
>> challenging port to do without regressions. That said, as part of
>> Clearview's Nemo Unification work, ce can be used as-is and will appear
>> like any GLDv3 link -- e.g., you can create GLDv3-based aggregations,
>> VLANs, and so forth. So far, the performance numbers look good too.
>
> Those tunables should still be available... e.g. via NDD or
> driver.conf, or whatever. Nemo (at least until Brussels) does nothing
> about that.
>
> The complexity for ce comes from the fact that it does some other
> "scary" things, e.g. optimizations for zero copy, etc. I suspect a
> lot of the "custom" stuff in ce, though, is much more cleanly handled
> by nemo.
>
> If someone wants me to work on a ce->nemo port, and wants to ensure
> that we can integrate the results into ON, then just say the word, and
> I'll start working on it.

I believe Bill Watson was doing the port of the ce driver, if I remember correctly. You may want to touch base with him (CC'd him).

-krgopi

> Ditto for qfe or gem.
>
> Unless the work is going to integrate into ON, I don't think using
> GLDv3 is appropriate at this time.
>
> -- Garrett
Peter Memishian
2007-Apr-18 23:35 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
> Those tunables should still be available... e.g. via NDD or driver.conf,
> or whatever. Nemo (at least until Brussels) does nothing about that.

The problem isn't with the tunables themselves, it's with the massive matrix of possible workloads they are designed to accommodate. That is, a lot of those tunables have been introduced to solve some particular customer escalation -- in rewriting the driver to be GLDv3-based, a lot of those issues would need to be reexamined. (And yes, ce also has other tricky things, like its own custom worker-thread model that can be optionally enabled.)

Personally, I don't think bringing drivers like ce under Nemo natively is worth the effort -- especially since Clearview UV will make them transparently work under GLDv3. Instead, I agree with Nicolas that I'd rather see our time spent opening up the GLDv3 APIs so that we can avoid future porting efforts :-)

-- 
meem
Nicolas Droux
2007-Apr-19 05:31 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
On Apr 18, 2007, at 11:38 AM, Garrett D'Amore wrote:

> Heavily used depends on what systems you have at hand. hme and eri
> are _widely_ deployed, and _widely_ used. E.g. all UltraSPARC
> II/IIi/IIe based systems shipped with either eri or hme. This
> includes "big" systems like the E10k, E6500, etc.
>
> The other "popular" NIC is qfe, but that code doesn't live in ON
> where I can readily access it. (The argument for doing qfe is
> probably quite compelling, as a lot of qfe NICs were sold, even
> well after gigabit started to become popular.)

We can probably compile a long list of drivers which are commonly used on the various platforms that have shipped over the years. The vast majority of these drivers are, however, not even 1Gb/s NICs, and should work fine under the Nemo unification softmac shim, which will bring them under the Nemo framework without any porting needed.

Nicolas.

-- 
Nicolas Droux - Solaris Networking - Sun Microsystems, Inc.
droux at sun.com - http://blogs.sun.com/droux
Nicolas Droux
2007-Apr-19 05:39 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
On Apr 18, 2007, at 11:39 AM, Garrett D'Amore wrote:

> Andrew Gallatin wrote:
>> Nicolas Droux writes:
>>> of the ones for legacy devices, and instead spend time on making the
>>> GLDv3 interface officially public so that more "interesting" drivers
>>> can be ported to GLDv3 :-)
>>
>> As the author of a 10GbE driver which is constrained to GLDv2 due to
>> lack of public GLDv3 interfaces, I whole-heartedly second this!
>
> Unfortunately, there isn't anything _I_ can personally do to fix
> this. Believe me, I would far, far rather have had nemo interfaces
> available for my own public GLDv2 drivers.

This is an OpenSolaris issue as well, so this is something we could tackle as part of an OpenSolaris project with community contributors. Since there's great interest in this, I can get the ball rolling on that front and send a proposal for a new project. Your help will definitely be greatly appreciated :-)

Nicolas.

-- 
Nicolas Droux - Solaris Networking - Sun Microsystems, Inc.
droux at sun.com - http://blogs.sun.com/droux
Garrett D'Amore
2007-Apr-19 05:58 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Nicolas Droux wrote:
> On Apr 18, 2007, at 11:38 AM, Garrett D'Amore wrote:
>
>> Heavily used depends on what systems you have at hand. hme and eri
>> are _widely_ deployed, and _widely_ used. E.g. all UltraSPARC
>> II/IIi/IIe based systems shipped with either eri or hme. This
>> includes "big" systems like the E10k, E6500, etc.
>>
>> The other "popular" NIC is qfe, but that code doesn't live in ON
>> where I can readily access it. (The argument for doing qfe is
>> probably quite compelling, as a lot of qfe NICs were sold, even
>> well after gigabit started to become popular.)
>
> We can probably compile a long list of drivers which are commonly used
> on the various platforms that have shipped over the years. The vast
> majority of these drivers are, however, not even 1Gb/s NICs, and should
> work fine under the Nemo unification softmac shim, which will bring
> them under the Nemo framework without any porting needed.

Here are the compelling reasons to consider moving some of these legacy drivers to Nemo:

1) These drivers have a lot of cut-n-paste code... e.g. DLPI handling in hme and eri probably contributes over 5K lines _each_. Most of that code is also duplicated in both GLDv2 and in nemo. More lines of code == more chances for bugs == more support headaches.

2) Nemo-ification automatically gets quite a bit of "free" performance from fewer CPU cycles used (direct function calls, etc.)

3) It gets us one step closer to being able to eliminate legacy APIs... possibly making the _frameworks_ simpler. (For example, some Consolidation Private and Contracted Private interfaces for things like HW checksum offload, DLPI fastpath, etc. can pretty much get eradicated once the last Sun consumers of them go away.)

4) Centralizing functionality for stuff like DLPI handling reduces long-term support costs for drivers like eri, hme, etc.

5) It unifies administrative tasks... e.g. these drivers can adopt things like Brussels, will "just work" with dladm (they don't today!), etc.

6) It ultimately leads us one step closer towards nixing "ndd" (see point #3 above, again). (Also removing duplicated kstat and ndd code in _these_ drivers.)

7) It paves the way for these drivers to support additional crossbow features like multiple unicast address support and interrupt blanking (which may or may not be useful with 100Mb links... but imagine an older system like an E450 with a few dozen 100Mbit links on it...)

8) As another item on #2, nemo-ification gets free multi-data transmit (and receive!), yielding better performance as well.

9) It ultimately also eradicates special-case code in places like SunVTS and other test suites, which have special cases for devices like hme, gem, etc. (e.g. using custom ioctls to set loopback modes.)

10) Making these drivers DLPI style 1 brings us much closer to removing the long-standing DLPI style 2 "bug".

Finally, for most of these legacy drivers, the effort to convert is quite small. On the order of a man-day. Seriously. With all these benefits at such a low cost, why _wouldn't_ we do it?

Historically the complaint was that GLDv2 wouldn't support everything these drivers wanted, or couldn't keep up in terms of performance (that was an outright lie, as my analysis some 4 years ago showed). With nemo, this is the way to go.

-- Garrett
Peter Memishian
2007-Apr-19 07:04 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Again, I'm the last person to fight against code cleanup, but it's not quite as cut-and-dried as you're making it to be. BTW, most of the points you make are also in strong support of making GLDv3 public, which I am quite in support of.

> 1) These drivers have a lot of cut-n-paste code... e.g. DLPI
> handling in hme and eri probably contributes over 5K lines _each_. Most
> of that code is also duplicated in both GLDv2 and in nemo. More lines of
> code == more chances for bugs == more support headaches.

Yes, we're well aware of the swill. If you look closely, you'll also find all sorts of one-off bugfixes in various drivers for various customer escalations -- we need to be careful not to regress any of those -- and when necessary to pull those fixes into the framework or into other drivers. (As an example of these sorts of oddities, check out eri_mk_mblk_tail_space(), which has now spread to some other drivers.)

> 2) Nemo-ification automatically gets quite a bit of "free"
> performance from fewer CPU cycles used (direct function calls, etc.)

Quite possibly, but the lion's share of these are not high-performance drivers.

> 3) It gets us one step closer to being able to eliminate legacy
> APIs... possibly making the _frameworks_ simpler. (For example, some
> Consolidation Private and Contracted Private interfaces for things like
> HW checksum offload, DLPI fastpath, etc. can pretty much get eradicated
> once the last Sun consumers of them go away.)

It doesn't really matter how much closer we get unless we can deal with the third-party driver problem (and some third parties won't even consider a GLDv3 conversion until they don't have to support S8/S9).

> 4) Centralizing functionality for stuff like DLPI handling reduces
> long-term support costs for drivers like eri, hme, etc.

Yes, this is why we did GLD in the first place.

> 5) It unifies administrative tasks... e.g. these drivers can adopt
> things like Brussels, will "just work" with dladm (they don't today!),
> etc.

It will "just work" with Clearview UV too. Actually, all of them will just work, no porting required.

> 6) It ultimately leads us one step closer towards nixing "ndd" (see
> point #3 above, again). (Also removing duplicated kstat and ndd code in
> _these_ drivers.)

See the third-party driver problem again.

> 7) It paves the way for these drivers to support additional crossbow
> features like multiple unicast address support and interrupt blanking
> (which may or may not be useful with 100Mb links... but imagine an older
> system like an E450 with a few dozen 100Mbit links on it...)

We get this with Clearview UV.

> 8) As another item on #2, nemo-ification gets free multi-data
> transmit (and receive!), yielding better performance as well.

Likewise.

> 9) It ultimately also eradicates special-case code in places like
> SunVTS and other test suites, which have special cases for devices like
> hme, gem, etc. (e.g. using custom ioctls to set loopback modes.)

That seems orthogonal to the GLDv3 work. GLDv3 does not specify the loopback testing model AFAIK.

> 10) Making these drivers DLPI style 1 brings us much closer to
> removing the long-standing DLPI style 2 "bug".

Again, this is part of Clearview UV. All drivers will have DLPI style 1 nodes in /dev/net. No porting necessary.

> Finally, for most of these legacy drivers, the effort to convert is
> quite small. On the order of a man-day. Seriously. With all these
> benefits at such a low cost, why _wouldn't_ we do it?

The cost is not just (or even primarily) in the porting effort. It's in the regression testing, the code churn, and the possible bugs that fall out of the work. Again, there's nothing I like better than clean code. But given that Clearview UV will benefit all non-GLDv3 drivers without porting them and also allow a number of framework simplifications, and that third-party drivers will stand in the way of ripping out legacy support, I'd rather see us focus on looking ahead, rather than rewriting history.

-- 
meem
Sebastien Roy
2007-Apr-19 13:59 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Nicolas Droux wrote:
> This is an OpenSolaris issue as well, so this is something we could
> tackle as part of an OpenSolaris project with community contributors.
> Since there's great interest in this, I can get the ball rolling on
> that front and send a proposal for a new project. Your help will
> definitely be greatly appreciated :-)

I'd like to be involved with this effort as well. I think we should also be clear that we're only talking about promoting the Nemo MAC driver interfaces, and not the MAC client or DLS client interfaces at this point.

-Seb
Garrett D'Amore
2007-Apr-19 15:26 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Peter Memishian wrote:
> Again, I'm the last person to fight against code cleanup, but it's not
> quite as cut-and-dried as you're making it to be. BTW, most of the points
> you make are also in strong support of making GLDv3 public, which I am
> quite in support of.
>
>> 1) These drivers have a lot of cut-n-paste code... e.g. DLPI
>> handling in hme and eri probably contributes over 5K lines _each_. Most
>> of that code is also duplicated in both GLDv2 and in nemo. More lines of
>> code == more chances for bugs == more support headaches.
>
> Yes, we're well aware of the swill. If you look closely, you'll also find
> all sorts of one-off bugfixes in various drivers for various customer
> escalations -- we need to be careful not to regress any of those -- and
> when necessary to pull those fixes into the framework or into other
> drivers. (As an example of these sorts of oddities, check out
> eri_mk_mblk_tail_space(), which has now spread to some other drivers.)

That particular hack seems to address a problem in the ndd support. Yech. If this driver were GLDv3, then when it makes the move to Brussels, this hack could go away!

>> 2) Nemo-ification automatically gets quite a bit of "free"
>> performance from fewer CPU cycles used (direct function calls, etc.)
>
> Quite possibly, but the lion's share of these are not high-performance
> drivers.

Even on low-performance drivers, per-packet overheads can be significant. Not everyone has multigigahertz CPUs to allocate to each NIC. There are still a lot of ~500MHz-ish systems out there, and a lot of them have several NICs or at least several NIC ports (e.g. a qfe, etc.) With link aggregation, even these "low performance" NICs are still quite useful on higher-performance systems.

>> 3) It gets us one step closer to being able to eliminate legacy
>> APIs... possibly making the _frameworks_ simpler. (For example, some
>> Consolidation Private and Contracted Private interfaces for things like
>> HW checksum offload, DLPI fastpath, etc. can pretty much get eradicated
>> once the last Sun consumers of them go away.)
>
> It doesn't really matter how much closer we get unless we can deal with
> the third-party driver problem (and some third parties won't even consider
> a GLDv3 conversion until they don't have to support S8/S9).

How many 3rd party drivers are using some of the ugly internals like DL_CAPABILITY_REQ (which is not a public interface), or DL_CONTROL_REQ, for example? What about DL_IOC_HDR_INFO? Or DLIOCNATIVE? (Okay, I know a few use DL_IOC_HDR_INFO... but they _shouldn't_.) These Sun-private DLPI extensions can simply go away if all the consumers of them go away. For some of these cases, we have 100% control over that at Sun.

>> 4) Centralizing functionality for stuff like DLPI handling reduces
>> long-term support costs for drivers like eri, hme, etc.
>
> Yes, this is why we did GLD in the first place.
>
>> 5) It unifies administrative tasks... e.g. these drivers can adopt
>> things like Brussels, will "just work" with dladm (they don't today!),
>> etc.
>
> It will "just work" with Clearview UV too. Actually, all of them will
> just work, no porting required.

Does this mean that Clearview is going to know about all the special-case stats that each of these drivers exports? (I.e. is there going to be a switch table entry in Clearview for hme, qfe, gem, etc.?) And at what level are we going to have to put brains in Clearview to do things like "ndd" for these drivers? Yech.

>> 6) It ultimately leads us one step closer towards nixing "ndd" (see
>> point #3 above, again). (Also removing duplicated kstat and ndd code in
>> _these_ drivers.)
>
> See the third-party driver problem again.

The NDD driver ioctl interfaces have never been published, and aren't part of the DDI. I hope there aren't 3rd party drivers that are relying on that particular interface. (To my knowledge, my own "afe" and "mxfe" drivers are the only unbundled drivers that support "ndd", and that is by "circumstance"... I provide my own tools which use an IOCTL set that happens to be "compatible" with ndd. Once afe is integrated, I'll make "ndd" the official way to tune, instead of providing my own "etherdiag" command.) The point is, if it's a Sun-internal-only interface, then it should be possible for us to fix it.

>> 7) It paves the way for these drivers to support additional crossbow
>> features like multiple unicast address support and interrupt blanking
>> (which may or may not be useful with 100Mb links... but imagine an older
>> system like an E450 with a few dozen 100Mbit links on it...)
>
> We get this with Clearview UV.

The only way Clearview can do multiaddress mode is by putting the interface in promiscuous mode. This causes all kinds of performance penalties in the underlying drivers. (From the fact that the underlying drivers perform an extra "dupmsg" to the fact that you take and process _every_ packet on the wire.) I sincerely hope you're not suggesting that this is equivalent to having the native ability in the driver to put multiple unicast addresses in the device's MAC address hardware filter.

>> 8) As another item on #2, nemo-ification gets free multi-data
>> transmit (and receive!), yielding better performance as well.
>
> Likewise.

This doesn't get you multi-data transmit and receive _at the driver level_. It just makes the stack think that you're getting it. Sure, it's better than nothing, but it's also not as good as having it all the way down at the driver level. (E.g. the driver isn't doing multiple putnext() calls, or multiple gld_recv() calls. Likewise for tx...)

>> 9) It ultimately also eradicates special-case code in places like
>> SunVTS and other test suites, which have special cases for devices like
>> hme, gem, etc. (e.g. using custom ioctls to set loopback modes.)
>
> That seems orthogonal to the GLDv3 work. GLDv3 does not specify the
> loopback testing model AFAIK.

It's coming with Brussels, I think. That may mean it's only available to GLDv3 drivers.

>> 10) Making these drivers DLPI style 1 brings us much closer to
>> removing the long-standing DLPI style 2 "bug".
>
> Again, this is part of Clearview UV. All drivers will have DLPI style 1
> nodes in /dev/net. No porting necessary.

Fair enough. But will there also be /dev/hme nodes? Will Clearview make tools like snoop automatically use the style 1 nodes? Will there be a /dev/net/hme0 if I have not set up a vanity name? For GLDv3 I get /dev/hme0 by default. This also makes it easier/automatic for sysadmins _today_, without having to retrain myself to use "clearview safe" names later.

>> Finally, for most of these legacy drivers, the effort to convert is
>> quite small. On the order of a man-day. Seriously. With all these
>> benefits at such a low cost, why _wouldn't_ we do it?
>
> The cost is not just (or even primarily) in the porting effort. It's in
> the regression testing, the code churn, and the possible bugs that fall
> out of the work. Again, there's nothing I like better than clean code.
> But given that Clearview UV will benefit all non-GLDv3 drivers without
> porting them and also allow a number of framework simplifications, and
> that third-party drivers will stand in the way of ripping out legacy
> support, I'd rather see us focus on looking ahead, rather than rewriting
> history.

But, by moving such large amounts of code to the GLDv3, you actually greatly simplify the drivers, thereby making support much, much easier for them. Ultimately this reduces long-term headaches, unless you're going to propose that the existing drivers are flawless and will never need to be touched again.

I also contend that the "churn" is actually pretty low, since most of what is happening is that we are going to be _removing_ code and replacing it with common code, which should also be well tested. One possible risk is that this process will expose new issues in the Nemo framework itself. Well, if that happens, that is a good thing, because we have resources dedicated to finding and fixing nemo bugs, for the benefit of all drivers.

Frankly, the biggest risk to eri/hme in the conversion is in the reporting of the kstats. I consider that a pretty low-impact risk, and I'm pretty sure I can get it "right" the first time.

-- Garrett
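To make the kstat concern concrete: under GLDv3, most of the per-driver named kstats are replaced by a single mc_getstat entry point that maps framework stat IDs onto driver counters, roughly as below. The dmfep counter names here are illustrative, not from the actual drivers:

    static int
    dmfe_m_stat(void *arg, uint_t stat, uint64_t *val)
    {
            dmfe_t *dmfep = arg;

            switch (stat) {
            case MAC_STAT_IFSPEED:
                    *val = dmfep->op_stats_speed;   /* bits per second */
                    break;
            case MAC_STAT_IPACKETS:
                    *val = dmfep->rx_stats_ipackets;
                    break;
            case ETHER_STAT_LINK_DUPLEX:
                    *val = dmfep->op_stats_duplex;
                    break;
            default:
                    return (ENOTSUP);   /* stat not kept by this driver */
            }
            return (0);
    }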
Nicolas Droux
2007-Apr-19 18:43 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Sebastien Roy wrote:
> Nicolas Droux wrote:
>> This is an OpenSolaris issue as well, so this is something we could
>> tackle as part of an OpenSolaris project with community contributors.
>> Since there's great interest in this, I can get the ball rolling on
>> that front and send a proposal for a new project. Your help will
>> definitely be greatly appreciated :-)
>
> I'd like to be involved with this effort as well. I think we should
> also be clear that we're only talking about promoting the Nemo MAC
> driver interfaces, and not the MAC client or DLS client interfaces at
> this point.

Great.

I'm not ruling out opening the MAC API as well at some point, but that would be part of a different effort.

Nicolas.

-- 
Nicolas Droux - Solaris Networking - Sun Microsystems, Inc.
droux at sun.com - http://blogs.sun.com/droux
Garrett D'Amore
2007-Apr-19 18:57 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Nicolas Droux wrote:
> Sebastien Roy wrote:
>> Nicolas Droux wrote:
>>> This is an OpenSolaris issue as well, so this is something we could
>>> tackle as part of an OpenSolaris project with community
>>> contributors. Since there's great interest in this, I can get the
>>> ball rolling on that front and send a proposal for a new project.
>>> Your help will definitely be greatly appreciated :-)
>>
>> I'd like to be involved with this effort as well. I think we should
>> also be clear that we're only talking about promoting the Nemo MAC
>> driver interfaces, and not the MAC client or DLS client interfaces at
>> this point.
>
> Great.
>
> I'm not ruling out opening the MAC API as well at some point, but that
> would be part of a different effort.

So, just let us know what we can do to help out with this effort.

-- Garrett
Peter Memishian
2007-Apr-19 21:23 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
>> It doesn't really matter how much closer we get unless we can deal with
>> the third-party driver problem (and some third parties won't even
>> consider a GLDv3 conversion until they don't have to support S8/S9).
>
> How many 3rd party drivers are using some of the ugly internals like
> DL_CAPABILITY_REQ (which is not a public interface), or DL_CONTROL_REQ,
> for example?

DL_CONTROL_REQ is an internal IPsec thing that is tied to Venus, which AFAIK is not open-source (and is already EOL'd) -- so it seems independent of this porting effort. DL_CAPABILITY_REQ is more of a question mark.

> What about DL_IOC_HDR_INFO?

All third-party drivers that I know of use DL_IOC_HDR_INFO.

> Or DLIOCNATIVE?

No drivers implement DLIOCNATIVE -- it's handled exclusively inside GLDv3 and it will stay that way. That said, it's documented in dlpi(7P) for use by applications like WireShark (and in fact, I have a patch to submit to the WireShark guys that turns it on).

> These Sun-private DLPI extensions can simply go away if all the
> consumers of them go away. For some of these cases, we have 100%
> control over that at Sun.

See above -- even if you ported all of the open-sourced drivers to GLDv3, you'd still be left with these interfaces.

> Does this mean that Clearview is going to know about all the special-case
> stats that each of these drivers exports? (I.e. is there going to
> be a switch table entry in Clearview for hme, qfe, gem, etc.?)

IIRC, softmac knows about the common names for kstats, and may try more than one when interacting with an underlying driver. It doesn't know one-offs for various drivers. We are not trying to handle the ndd case -- that's Brussels territory.

> The NDD driver ioctl interfaces have never been published, and aren't
> part of the DDI. I hope there aren't 3rd party drivers that are relying
> on that particular interface.

There are -- e.g., Syskonnect's drivers support them.

> The point is, if it's a Sun-internal-only interface, then it should be
> possible for us to fix it.

It leaked out a long time ago.

> The only way Clearview can do multiaddress mode is by putting the
> interface in promiscuous mode. This causes all kinds of performance
> penalties in the underlying drivers. (From the fact that the underlying
> drivers perform an extra "dupmsg" to the fact that you take and process
> _every_ packet on the wire.) I sincerely hope you're not suggesting
> that this is equivalent to having the native ability in the driver to
> put multiple unicast addresses in the device's MAC address hardware
> filter.

I was speaking more generally of Crossbow features. Yes, some things may not be tenable, though IIRC Crossbow already reverts to promiscuous mode when more addresses are needed than the hardware can directly support.

> This doesn't get you multi-data transmit and receive _at the driver
> level_. It just makes the stack think that you're getting it. Sure,
> it's better than nothing, but it's also not as good as having it all the
> way down at the driver level. (E.g. the driver isn't doing multiple
> putnext() calls, or multiple gld_recv() calls. Likewise for tx...)

First, my recollection is that GLDv3 does not support MDT -- only LSO. Further, as I recall, the plan was to remove MDT support and use LSO consistently. So I'm not sure this is much of a concern.

>>> 9) It ultimately also eradicates special-case code in places like
>>> SunVTS and other test suites, which have special cases for devices like
>>> hme, gem, etc. (e.g. using custom ioctls to set loopback modes.)
>>
>> That seems orthogonal to the GLDv3 work. GLDv3 does not specify the
>> loopback testing model AFAIK.
>
> It's coming with Brussels, I think. That may mean it's only available to
> GLDv3 drivers.

I don't think so. It should be easy for any driver to support those ioctls. I'd welcome adding some common utility routines in the kernel to make that easy.

>>> 10) Making these drivers DLPI style 1 brings us much closer to
>>> removing the long-standing DLPI style 2 "bug".
>>
>> Again, this is part of Clearview UV. All drivers will have DLPI style 1
>> nodes in /dev/net. No porting necessary.
>
> Fair enough. But will there also be /dev/hme nodes? Will Clearview
> make tools like snoop automatically use the style 1 nodes?

Yes. All DLPI applications in ON will use libdlpi (in fact, a few already do, including snoop), which will look in /dev/net before /dev. We have also made the libdlpi API available for third-party use (there are complete manpages), though we recognize that it may take some time for things to be ported to use it. (BTW, libdlpi rips out tons of code :-)

> Will there be a /dev/net/hme0 if I have not set up a vanity name?

Yes, that will be the default name.

> But, by moving such large amounts of code to the GLDv3, you actually
> greatly simplify the drivers, thereby making support much, much easier
> for them. Ultimately this reduces long-term headaches, unless you're
> going to propose that the existing drivers are flawless and will never
> need to be touched again.

Of course they're not flawless -- and this is a good argument, but to me it's more an argument for getting GLDv3 out the door than it is for rewriting the past.

-- 
meem
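As an illustration of how little code a libdlpi consumer needs, here is a minimal capture program against the documented API: it opens a link by name (looking in /dev/net before /dev), binds, and reads one frame. The link name "hme0" is just an example:

    #include <libdlpi.h>
    #include <stdio.h>

    int
    main(void)
    {
            dlpi_handle_t dh;
            uchar_t buf[1514];
            size_t buflen = sizeof (buf);

            if (dlpi_open("hme0", &dh, 0) != DLPI_SUCCESS)
                    return (1);
            if (dlpi_bind(dh, DLPI_ANY_SAP, NULL) != DLPI_SUCCESS) {
                    dlpi_close(dh);
                    return (1);
            }
            /* block until one frame arrives (-1 == no timeout) */
            if (dlpi_recv(dh, NULL, NULL, buf, &buflen, -1, NULL) ==
                DLPI_SUCCESS)
                    (void) printf("received %zu bytes\n", buflen);
            dlpi_close(dh);
            return (0);
    }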
Sebastien Roy
2007-Apr-19 22:07 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Garrett D'Amore wrote:
> I've gone ahead and done a conversion of dmfe to GLDv3 (nemo). I
> actually have a dmfe device on SPARC (in this case it's on a SPARC
> laptop), so I figured this would be beneficial.
>
> I'd like to have folks review the work at
> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev

This is refreshing. :-) Only a couple of comments:

usr/src/uts/sun4u/dmfe/Makefile

* No comments

usr/src/uts/sun4u/io/dmfe/dmfe_main.c

* 33: is there no longer any kind of version number displayed by modinfo for dmfe?

* 214: the dmfe_m_getcapab function unconditionally always returns B_FALSE for all capabilities, so I'm wondering what the utility is in providing an MC_GETCAPAB entrypoint at all for this driver.

* 596, 599: Why comment those out? You can still keep track of these stats in dmfe_t even if GLDv3 doesn't yet ask you for these values, right?

* 1158: Without context about pre-existing code having once been there to process VLAN headers, this comment seems odd. I'd just blow this comment away.

* 1349: cstyle; indent by 4.

usr/src/uts/sun4u/io/dmfe/dmfe_mii.c

* No comments

usr/src/uts/sun4u/sys/dmfe_impl.h

* No comments

-Seb
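On the MC_GETCAPAB comment: the entry point in question is tiny, and its main value is as the documented hook where capabilities such as MAC_CAPAB_MULTIADDRESS can be hung later. A sketch of such a stub (not the actual webrev code) might look like this:

    static boolean_t
    dmfe_m_getcapab(void *arg, mac_capab_t cap, void *cap_data)
    {
            switch (cap) {
            case MAC_CAPAB_HCKSUM:
                    /* this chip has no checksum offload */
                    return (B_FALSE);
            default:
                    /* decline anything we don't understand */
                    return (B_FALSE);
            }
    }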
Garrett D'Amore
2007-Apr-19 22:19 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Sebastien Roy wrote:
> Garrett D'Amore wrote:
>> I've gone ahead and done a conversion of dmfe to GLDv3 (nemo).  I
>> actually have a dmfe device on SPARC (in this case it's on a SPARC
>> laptop), so I figured this would be beneficial.
>>
>> I'd like to have folks review the work at
>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>
> This is refreshing. :-)  Only a couple of comments:

Thanks.

> usr/src/uts/sun4u/dmfe/Makefile
>
> * No comments.
>
> usr/src/uts/sun4u/io/dmfe/dmfe_main.c
>
> * 33: is there no longer any kind of version number displayed by
> modinfo for dmfe?

Yes.  There was some consensus that the version numbers displayed by
modinfo were something short of useless, and that they would eventually
be removed.  Rather than continue to update the modinfo version string,
I'm proactively removing it as I happen to touch relevant code.

> * 214: the dmfe_m_getcapab function unconditionally returns B_FALSE
> for all capabilities, so I'm wondering what the utility is in
> providing an MC_GETCAPAB entrypoint at all for this driver.

Almost none. :-)  Except that it documents a) that I considered adding
one, and b) what kinds of capabilities can be added in the near future.
I expect I'll probably add multiaddress support soon enough.  (Mostly I
have to set up a test environment to test it, which is why I didn't
just do it now.)

> * 596, 599: Why comment those out?  You can still keep track of these
> stats in dmfe_t even if GLDv3 doesn't yet ask you for these values,
> right?

Actually, I've fixed this, now that GLDv3 will be able to report them.
See PSARC 2007/220. :-)

> * 1158: Without context about pre-existing code having once been there
> to process VLAN headers, this comment seems odd.  I'd just blow this
> comment away.

Okay.

> * 1349: cstyle; indent by 4.

Okay.

> usr/src/uts/sun4u/io/dmfe/dmfe_mii.c
>
> * No comments.
>
> usr/src/uts/sun4u/sys/dmfe_impl.h
>
> * No comments.
>
> -Seb

Thanks for the review.

-- Garrett
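For readers without the webrev handy, the entrypoint in question is
tiny; a stub along these lines -- a sketch, not the actual webrev code
-- is all that's involved:

#include <sys/types.h>
#include <sys/mac.h>

/*
 * Sketch of a GLDv3 MC_GETCAPAB entrypoint that declines every
 * capability.  Illustrative only: a later version of the driver would
 * add switch cases (e.g. for multiple unicast addresses), fill in
 * *cap_data, and return B_TRUE for the capabilities it supports.
 */
static boolean_t
dmfe_m_getcapab(void *arg, mac_capab_t cap, void *cap_data)
{
        switch (cap) {
        /* Capability cases would go here. */
        default:
                return (B_FALSE);
        }
}

The entrypoint is registered through the mc_getcapab member of the
driver's mac_callbacks_t, so keeping the stub around costs essentially
nothing.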
Garrett D'Amore
2007-Apr-19 22:26 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Peter Memishian wrote:
> > But, by moving such large amounts of code to the GLDv3, you actually
> > greatly simplify the drivers, thereby making support much, much
> > easier for them.  Ultimately this reduces long-term headaches,
> > unless you're going to propose that the existing drivers are
> > flawless and will never need to be touched again.
>
> Of course they're not flawless -- and this is a good argument, but to
> me it's more an argument for getting GLDv3 out the door than it is for
> rewriting the past.

I think this point is the one sticking issue.  That is, you seem to be
arguing that these two approaches are mutually exclusive.  I think that
is a bad assumption.  In particular, there are probably a lot more
folks who can help with a port of a driver to GLDv3 than there are who
can contribute meaningfully to making GLDv3 open.

-- Garrett
Garrett D'Amore
2007-Apr-20 21:08 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Sebastien Roy wrote:
> Garrett D'Amore wrote:
>> I've gone ahead and done a conversion of dmfe to GLDv3 (nemo).  I
>> actually have a dmfe device on SPARC (in this case it's on a SPARC
>> laptop), so I figured this would be beneficial.
>>
>> I'd like to have folks review the work at
>> http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
>
> This is refreshing. :-)  Only a couple of comments:
>
> usr/src/uts/sun4u/dmfe/Makefile
>
> * No comments.
>
> usr/src/uts/sun4u/io/dmfe/dmfe_main.c
>
> * 33: is there no longer any kind of version number displayed by
> modinfo for dmfe?
>
> * 214: the dmfe_m_getcapab function unconditionally returns B_FALSE
> for all capabilities, so I'm wondering what the utility is in
> providing an MC_GETCAPAB entrypoint at all for this driver.
>
> * 596, 599: Why comment those out?  You can still keep track of these
> stats in dmfe_t even if GLDv3 doesn't yet ask you for these values,
> right?
>
> * 1158: Without context about pre-existing code having once been there
> to process VLAN headers, this comment seems odd.  I'd just blow this
> comment away.
>
> * 1349: cstyle; indent by 4.

Just as a follow-up, I nuked that DEBUG message altogether.  DTrace
should be used instead.

I want to go on a mission to rip out a bunch of the debug stuff from
these drivers.  I did a bunch for dmfe, but I missed that one, and
really it was a good candidate (since all the info it provides is
easily obtainable with the normal entry/exit probes in DTrace).

-- Garrett

> usr/src/uts/sun4u/io/dmfe/dmfe_mii.c
>
> * No comments.
>
> usr/src/uts/sun4u/sys/dmfe_impl.h
>
> * No comments.
>
> -Seb
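To illustrate the sort of conversion being discussed, here is a
hypothetical DEBUG-only message and its static-probe (SDT) replacement.
The function, probe name, and arguments are invented for the sketch;
this is not dmfe's actual code.

#include <sys/types.h>
#include <sys/cmn_err.h>
#include <sys/sdt.h>

static void
drv_rx_note(int len, int index)
{
#ifdef  DEBUG
        /*
         * Old style: compiled in only under DEBUG, so it needs a
         * special build, and it spams the log besides.  (The "!"
         * prefix keeps cmn_err output off the console.)
         */
        cmn_err(CE_NOTE, "!rx %d bytes, descriptor %d", len, index);
#endif
        /*
         * New style: always compiled in, effectively free until
         * someone enables sdt:::rx-note from dtrace(1M).
         */
        DTRACE_PROBE2(rx__note, int, len, int, index);
}

The entry/exit case Garrett mentions needs no driver changes at all:
something like dtrace -n 'fbt::dmfe_*:entry' already covers it.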
Darren.Reed at Sun.COM
2007-Apr-20 21:34 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Garrett D'Amore wrote:
> ...
> Just as a follow-up, I nuked that DEBUG message altogether.  DTrace
> should be used instead.
>
> I want to go on a mission to rip out a bunch of the debug stuff from
> these drivers.  I did a bunch for dmfe, but I missed that one, and
> really it was a good candidate (since all the info it provides is
> easily obtainable with the normal entry/exit probes in DTrace).

I've filed 6547496 to request that DTrace probes be added where there
were once debug printfs.  As people go through other drivers, similar
RFEs should be filed -- or the work done, time permitting.

Darren
Garrett D'Amore
2007-Apr-20 22:01 UTC
[crossbow-discuss] Re: [networking-discuss] webrev: conversion of dmfe to nemo
Darren.Reed at Sun.COM wrote:
> Garrett D'Amore wrote:
>
>> ...
>> Just as a follow-up, I nuked that DEBUG message altogether.  DTrace
>> should be used instead.
>>
>> I want to go on a mission to rip out a bunch of the debug stuff from
>> these drivers.  I did a bunch for dmfe, but I missed that one, and
>> really it was a good candidate (since all the info it provides is
>> easily obtainable with the normal entry/exit probes in DTrace).
>
> I've filed 6547496 to request that DTrace probes be added where there
> were once debug printfs.  As people go through other drivers, similar
> RFEs should be filed -- or the work done, time permitting.
>
> Darren

Thanks.  I didn't remove all the debug printfs, mostly because I didn't
want the task of converting them all to DTrace. :-)  But I did remove
some of the ones I felt were covered "by default" (via entry/exit
probes).  I did even more of this in my eri conversion.

-- Garrett