Ziqi Zhao
2023-Aug-19 22:50 UTC
[Bridge] [PATCH] net: bridge: Fix refcnt issues in dev_ioctl
On Sat, Aug 19, 2023 at 12:25:15PM +0300, Nikolay Aleksandrov wrote:

Hi Nik,

Thank you so much for reviewing the patch and getting back to me!

> IIRC there was no bug, it was a false-positive. The reference is held a bit
> longer but then released, so the device is deleted later.
>
> If you reproduced it, is the device later removed or is it really stuck?

I ran the reproducer again without the patch and it seems you are correct. It was trying to create a very short-lived bridge, then delete it immediately in the next call. The device in question, "wpan4", never showed up when I polled with `ip link` in the VM, so I'd say it did not get stuck.

> How would it leak a reference, could you elaborate?
> The reference is always taken and always released after the call.

This was where I got a bit confused too. The system had a timeout of 140 seconds for the unregister_netdevice check. If the bridge in question was created and deleted repeatedly, the warning would indeed not indicate an actual reference leak. But how could its reference show up after 140 seconds if the bridge's creation and deletion all happened within a couple of milliseconds?

So I let the system run for a bit longer with the reproducer, and after ~200 seconds the kernel crashed, complaining that some tasks had been waiting too long (more than 143 seconds) to get hold of the br_ioctl_mutex. This was also quite strange to me: on the surface it definitely looked like a deadlock, but the strict locking order as I described previously should prevent any deadlock from happening.

Anyway, I decided to test switching the lock order, since both the refcnt warning and the task stall seemed closely related to the locks mentioned above. When I ran the reproducer again with the patch applied, both the warning and the stall went away. So I guess the patch is still relevant for preventing bugs in some extreme cases -- although the scenario created by the reproducer would probably never happen in real-world usage?
Please let me know whether you have any thoughts on how the above issues were triggered, and what other information I could gather to further demystify this bug. Thank you again for your help!

Best regards,
Ziqi
Nikolay Aleksandrov
2023-Aug-22 10:40 UTC
[Bridge] [PATCH] net: bridge: Fix refcnt issues in dev_ioctl
On 8/20/23 01:50, Ziqi Zhao wrote:
> On Sat, Aug 19, 2023 at 12:25:15PM +0300, Nikolay Aleksandrov wrote:
>
> Hi Nik,
>
> Thank you so much for reviewing the patch and getting back to me!
>
>> IIRC there was no bug, it was a false-positive. The reference is held a bit
>> longer but then released, so the device is deleted later.
>>
>> If you reproduced it, is the device later removed or is it really stuck?
>
> I ran the reproducer again without the patch and it seems you are
> correct. It was trying to create a very short-lived bridge, then delete
> it immediately in the next call. The device in question, "wpan4", never
> showed up when I polled with `ip link` in the VM, so I'd say it did not
> get stuck.
>
>> How would it leak a reference, could you elaborate?
>> The reference is always taken and always released after the call.
>
> This was where I got a bit confused too. The system had a timeout of
> 140 seconds for the unregister_netdevice check. If the bridge in
> question was created and deleted repeatedly, the warning would indeed
> not indicate an actual reference leak. But how could its reference show
> up after 140 seconds if the bridge's creation and deletion all happened
> within a couple of milliseconds?
>
> So I let the system run for a bit longer with the reproducer, and after
> ~200 seconds the kernel crashed, complaining that some tasks had been
> waiting too long (more than 143 seconds) to get hold of the
> br_ioctl_mutex. This was also quite strange to me: on the surface it
> definitely looked like a deadlock, but the strict locking order as I
> described previously should prevent any deadlock from happening.
>
> Anyway, I decided to test switching the lock order, since both the
> refcnt warning and the task stall seemed closely related to the locks
> mentioned above. When I ran the reproducer again with the patch
> applied, both the warning and the stall went away. So I guess the patch
> is still relevant for preventing bugs in some extreme cases -- although
> the scenario created by the reproducer would probably never happen in
> real-world usage?

Thank you for testing, but we really need to understand what is going on and why the device isn't getting deleted for so long. Currently I don't have the time to debug it properly (I'll be able to next week at the earliest), and we can't apply the patch based only on tests, without understanding the underlying issue.

I'd look into what exactly the reproducer is doing, and also check the system state once the deadlock has happened. You can also list the currently held locks (if CONFIG_LOCKDEP is enabled), for example via magic SysRq + d. See which processes are holding them, what their priorities are, and so on. Try to build some theory of how a deadlock might happen and then go about proving it. Does the 8021q module have the same problem? It uses similar code to set its hook.

> Please let me know whether you have any thoughts on how the above
> issues were triggered, and what other information I could gather to
> further demystify this bug. Thank you again for your help!
>
> Best regards,
> Ziqi