Hello,

I am requesting a sanity check on my "driver-level" bridging approach in this e-mail. Please let me know if you see any gotchas in this approach.

Problem:

  * I have a single physical ethernet device on my embedded linux
    target, and I want to bridge packets to/from a device on the other
    side of a shared-memory region that also processes ethernet data.
  * For a variety of reasons, I can't use the "brctl addbr" utility
    (I've tried!), so I'm implementing the bridging functionality for
    this specific case at the driver level.

My approach (rough sketches of both paths are appended at the end of
this message):

  * Modify the existing ethernet driver by adding some additional code:
    * Whenever I receive a packet on the physical ethernet:
      * Just before sending the skb to the network interface
        (netif_rx(skb)), also push the packet to shmem.
    * Whenever I send a packet to the physical ethernet:
      * Before pushing the packet to the physical ethernet device, also
        push the packet to shared memory.
    * When I receive a packet from the "shared memory" device:
      * Build two copies of an skb.
      * Get a handle to the network device with dev_get_by_name("eth0").
      * Push one copy up to the stack (netif_rx(skb)).
      * Push the other (identical) skb out the physical ethernet device.
      * dev_put() the device when I'm done.
  * I use netif_stop_queue() and netif_wake_queue() for flow control
    toward the network interface, but for shmem congestion I just drop
    packets (for now), since I don't have easy queue APIs for flow
    control handy.

Some questions/concerns I have:

  * Is there something in the skb that has "directionality"?
    * That is, if I take an skb that is intended to go up to the
      network stack (e.g. netif_rx(skb)) and "push" it down to the
      physical device, will something get unsynchronized?
    * Or, if I get an skb from the system (via hard_start_xmit()), can
      I push it up again with netif_rx(), or will some "directionality"
      field in the skb make things go out of whack?
    * Also, if I push all these additionally created skbs up to the
      interface, or down, is there some count kept between the network
      stack and the physical device that may get out of sync?
  * Also, are there man pages for the above APIs and for skb questions
    in general (other than the source code)? I wasn't able to find good
    documentation with a few minutes of Googling.

In any case, I'm already partway down this path, but I could sure use
any advance warning of potential issues and workarounds!

Thanks again.

Suresh Bhaskaran
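First, a rough sketch of the two tap points in the driver. This is a
sketch only: shmem_push_frame() is a hypothetical placeholder for
whatever the real shared-memory transport looks like, and the two
functions are simplified stand-ins for the driver's actual RX/TX paths.
For simplicity both taps assume linear skbs; a fragmented skb would
need skb_copy_bits() instead of reading skb->data directly.

/* Sketch only -- shmem_push_frame() is hypothetical. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>

/* Hypothetical: copy one frame into the shared-memory ring;
 * returns < 0 if the ring is full. */
extern int shmem_push_frame(const void *data, unsigned int len);

/* RX path: called from the driver's receive handler, before the
 * ethernet header has been pulled off the skb. */
static void my_rx(struct sk_buff *skb, struct net_device *dev)
{
	/* Tap a copy to shmem before the stack consumes the skb.
	 * If shmem is full, the shmem copy is simply dropped (for now). */
	shmem_push_frame(skb->data, skb->len);

	skb->protocol = eth_type_trans(skb, dev);
	netif_rx(skb);
}

/* TX path: the driver's hard_start_xmit() implementation. */
static int my_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	shmem_push_frame(skb->data, skb->len);	/* tap outgoing frames too */

	/* ... then hand the skb to the real hardware as before ... */
	return NETDEV_TX_OK;
}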
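And a rough sketch of the shared-memory receive path, with
shmem_pop_frame() again a hypothetical placeholder. This is also where
the "directionality" question lives: eth_type_trans() is what prepares
an skb for the receive path -- it pulls the ethernet header and sets
skb->protocol and skb->pkt_type -- so it is called only on the copy
headed for netif_rx(), while the clone headed back out the wire keeps
its ethernet header intact. (dev_get_by_name() takes the extra
&init_net namespace argument on 2.6.24 and later kernels; drop it on
older ones.)

/* Sketch only -- shmem_pop_frame() is hypothetical. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/if_ether.h>
#include <net/net_namespace.h>

/* Hypothetical: pop one frame from the shared-memory ring into buf;
 * returns its length in bytes, or <= 0 if the ring is empty. */
extern int shmem_pop_frame(void *buf, unsigned int maxlen);

static void shmem_rx(void)
{
	struct net_device *dev;
	struct sk_buff *skb, *clone;
	int len;

	dev = dev_get_by_name(&init_net, "eth0");
	if (!dev)
		return;

	skb = netdev_alloc_skb(dev, ETH_FRAME_LEN + NET_IP_ALIGN);
	if (!skb)
		goto out;
	skb_reserve(skb, NET_IP_ALIGN);

	len = shmem_pop_frame(skb->data, ETH_FRAME_LEN);
	if (len <= 0) {
		kfree_skb(skb);
		goto out;
	}
	skb_put(skb, len);

	/* One copy goes back out the physical port, ethernet header
	 * untouched; dev_queue_xmit() runs it through the normal qdisc,
	 * so the driver's queue-stop/wake flow control still applies. */
	clone = skb_clone(skb, GFP_ATOMIC);
	if (clone)
		dev_queue_xmit(clone);

	/* The other copy goes up the stack; eth_type_trans() pulls the
	 * header and sets skb->protocol and skb->pkt_type for receive. */
	skb->protocol = eth_type_trans(skb, dev);
	netif_rx(skb);
out:
	dev_put(dev);
}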
Stephen Hemminger
2008-Aug-27 15:17 UTC
[Bridge] Socket Buffer bridging of Ethernet frames
On Tue, 26 Aug 2008 19:29:12 -0700 "Suresh Bhaskaran" <Suresh.Bhaskaran at magnumsemi.com> wrote:

> [original message quoted in full above]

You just reinvented AF_PACKET with mmap. I don't see why you want to.
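For reference, a minimal userspace sketch of what that suggestion looks
like: an AF_PACKET socket with a PACKET_RX_RING mmap'ed into the
process, which sees raw ethernet frames on an interface without any
driver changes. Error handling is trimmed, the ring geometry is an
arbitrary example, and the socket requires root (CAP_NET_RAW).

#include <stdio.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <poll.h>
#include <arpa/inet.h>

int main(void)
{
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	/* 64 blocks of 4 KiB, two 2 KiB frame slots per block. */
	struct tpacket_req req = {
		.tp_block_size = 4096,
		.tp_block_nr   = 64,
		.tp_frame_size = 2048,
		.tp_frame_nr   = 128,
	};
	setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

	/* Map the ring; the kernel writes frames straight into it. */
	char *ring = mmap(NULL, req.tp_block_size * req.tp_block_nr,
			  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	struct sockaddr_ll ll = {
		.sll_family   = AF_PACKET,
		.sll_protocol = htons(ETH_P_ALL),
		.sll_ifindex  = if_nametoindex("eth0"),
	};
	bind(fd, (struct sockaddr *)&ll, sizeof(ll));

	for (unsigned i = 0; ; i = (i + 1) % req.tp_frame_nr) {
		struct tpacket_hdr *hdr =
			(struct tpacket_hdr *)(ring + i * req.tp_frame_size);

		/* Wait until the kernel hands this slot to userspace. */
		while (!(hdr->tp_status & TP_STATUS_USER)) {
			struct pollfd pfd = { .fd = fd, .events = POLLIN };
			poll(&pfd, 1, -1);
		}

		printf("frame: %u bytes\n", hdr->tp_len);

		hdr->tp_status = TP_STATUS_KERNEL;	/* hand slot back */
	}
}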