search for: mutex_unlock

Displaying 20 results from an estimated 2162 matches for "mutex_unlock".

2013 Apr 25
9
[PATCH v11 0/4] tcm_vhost hotplug
Changes in v11 - Drop change log history in commit log Changes in v10 - Drop comments about lun - Add Enable VIRTIO_SCSI_F_HOTPLUG to this series Changes in v9 - Drop tcm_vhost_check_feature - Add Refactor the lock nesting rule to this series Asias He (4): tcm_vhost: Refactor the lock nesting rule tcm_vhost: Add hotplug/hotunplug support tcm_vhost: Add ioctl to get and set events missed
2006 Sep 12
3
dtrace reports different counts depending on what is being traced
We are using the following dtrace script to analyze an application. However, we see a different number of lwp_yield and yield calls reported depending on whether or not we trace mutex_lock or mutex_unlock. The number reported when we are not tracing mutex_lock or mutex_unlock is higher. What could be going on here, and which one is the correct number? This is on S10 U2. Thanks, Rao. #!/usr/sbin/dtrace -qs syscall::yield:entry /execname=="rrcpd"/ { @[probefunc]=count()...
2023 Feb 23
5
[PATCH 0/5] vhost-scsi: Fix management operation hangs
The following patches were made over Linus' tree and also apply over the mst tree's vhost branch. The patches fix an issue where management operations like LUN mapping/unmapping and device addition hang for 30 seconds, or up to N minutes depending on the device. The problem is that we use a global mutex to protect the list of tpgs, but we hold that mutex during those management operations. So if you
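As a rough illustration of the problem the cover letter describes, here is a minimal user-space sketch, with hypothetical names rather than the actual vhost-scsi structures: the usual fix is to hold the global list mutex only long enough to find and pin the target, then drop it before the slow management work runs under a per-target lock.

/* Hypothetical user-space analogy, not the vhost-scsi code: a single global
 * mutex protects the target list, but holding it across a slow per-target
 * management operation stalls every other operation behind it.  The sketch
 * below keeps the global lock to a short list walk and does the slow work
 * under a per-target lock instead. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct target {
        struct target *next;
        int id;
        pthread_mutex_t lock;                   /* per-target lock for slow operations */
};

static pthread_mutex_t target_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct target *target_list;

static void slow_management_op(struct target *t)
{
        sleep(1);                               /* stands in for a long LUN map/unmap */
        printf("managed target %d\n", t->id);
}

static void manage_target(int id)
{
        struct target *t;

        pthread_mutex_lock(&target_list_lock);  /* short critical section */
        for (t = target_list; t; t = t->next)
                if (t->id == id)
                        break;
        if (t)
                pthread_mutex_lock(&t->lock);   /* pin the target ... */
        pthread_mutex_unlock(&target_list_lock);/* ... then release the global lock */

        if (!t)
                return;
        slow_management_op(t);                  /* slow work, global lock not held */
        pthread_mutex_unlock(&t->lock);
}

int main(void)
{
        static struct target t = { .next = NULL, .id = 0,
                                   .lock = PTHREAD_MUTEX_INITIALIZER };

        target_list = &t;
        manage_target(0);
        return 0;
}

The key point is that the global lock only covers the list walk, so management operations on unrelated targets no longer serialize behind one slow operation.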
2023 Mar 21
8
[PATCH v2 0/7] vhost-scsi: Fix crashes and management op hangs
The following patches were made over Linus' tree. The patches fix 3 issues: 1. If a user performs LIO LUN unmapping before the endpoint has been cleared, then we can end up trying to free a bogus tmf struct if the TMF is still executing when we do the unmap. 2. If vhost_scsi_setup_vq_cmds fails, we can leave the tpg->vhost_scsi pointer set and we can end up trying to access a freed struct. 3.
2013 Apr 25
6
[PATCH v10 0/4] tcm_vhost hotplug
Asias He (4): tcm_vhost: Refactor the lock nesting rule tcm_vhost: Add hotplug/hotunplug support tcm_vhost: Add ioctl to get and set events missed flag tcm_vhost: Enable VIRTIO_SCSI_F_HOTPLUG drivers/vhost/tcm_vhost.c | 262 +++++++++++++++++++++++++++++++++++++++++++--- drivers/vhost/tcm_vhost.h | 13 +++ 2 files changed, 259 insertions(+), 16 deletions(-) -- 1.8.1.4
2019 Sep 25
2
[PATCH v2 03/27] drm/dp_mst: Destroy MSTBs asynchronously
...stb->mgr; > - struct drm_dp_mst_port *port, *tmp; > - bool wake_tx = false; > > - mutex_lock(&mgr->lock); > - list_for_each_entry_safe(port, tmp, &mstb->ports, next) { > - list_del(&port->next); > - drm_dp_mst_topology_put_port(port); > - } > - mutex_unlock(&mgr->lock); > - > - /* drop any tx slots msg */ > - mutex_lock(&mstb->mgr->qlock); > - if (mstb->tx_slots[0]) { > - mstb->tx_slots[0]->state = DRM_DP_SIDEBAND_TX_TIMEOUT; > - mstb->tx_slots[0] = NULL; > - wake_tx = true; > - } > - if (mstb...
2023 Mar 02
1
[PATCH v2 7/8] vdpa_sim: replace the spinlock with a mutex to protect the state
...&vdpasim->vqs[idx]; bool old_ready; - spin_lock(&vdpasim->lock); + mutex_lock(&vdpasim->mutex); old_ready = vq->ready; vq->ready = ready; if (vq->ready && !old_ready) { vdpasim_queue_ready(vdpasim, idx); } - spin_unlock(&vdpasim->lock); + mutex_unlock(&vdpasim->mutex); } static bool vdpasim_get_vq_ready(struct vdpa_device *vdpa, u16 idx) @@ -299,9 +299,9 @@ static int vdpasim_set_vq_state(struct vdpa_device *vdpa, u16 idx, struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; struct vringh *vrh = &vq->vring; - spin_lo...
2014 Sep 09
2
mutex
...time, 'cat /sys/class/misc/hw_random/rng_*' Result: the cat process will get stuck; it returns if we kill the dd process. We have some static variables (e.g., current_rng, data_avail, etc.) in hw_random/core.c; they are protected by rng_mutex. I tried to work around this issue with udelay(100) after mutex_unlock() in rng_dev_read(). This gives hwrng_attr_*_show() a chance to get the mutex. This patch also contains some cleanup, moving some code out of mutex protection. Do you have any suggestions? Thanks. diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c index aa30a25..fa69020 10...
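To make the starvation concrete, here is a minimal user-space analogy with hypothetical names, not the hw_random code: a reader loop that immediately re-takes the mutex it just dropped usually wins it again before the waiter can run, and a brief yield (standing in for the delay-after-mutex_unlock() workaround mentioned above) opens a window for the other thread.

/* User-space sketch of the starvation: reader_loop plays rng_dev_read(),
 * sysfs_show plays hwrng_attr_*_show().  Without the yield after the unlock,
 * the show thread can wait a very long time for the mutex. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t rng_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *reader_loop(void *arg)
{
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
                pthread_mutex_lock(&rng_mutex);
                /* ... copy random bytes to the caller ... */
                pthread_mutex_unlock(&rng_mutex);
                sched_yield();                  /* the "give others a chance" workaround */
        }
        return NULL;
}

static void *sysfs_show(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&rng_mutex);
        puts("rng_current: ...");               /* read the state protected by rng_mutex */
        pthread_mutex_unlock(&rng_mutex);
        return NULL;
}

int main(void)
{
        pthread_t reader, show;

        pthread_create(&reader, NULL, reader_loop, NULL);
        pthread_create(&show, NULL, sysfs_show, NULL);
        pthread_join(show, NULL);               /* without the yield, this can stall badly */
        pthread_join(reader, NULL);
        return 0;
}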
2013 May 03
5
[PATCH 0/5] vhost-scsi cleanup
Asias He (5): vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration vhost-scsi: Rename struct vhost_scsi *s to *vs vhost-scsi: Make func indention more consistent vhost-scsi: Rename struct tcm_vhost_tpg *tv_tpg to *tpg vhost-scsi: Rename struct tcm_vhost_cmd *tv_cmd to *cmd drivers/vhost/scsi.c | 469 +++++++++++++++++++++++++++------------------------ 1 file changed,
2019 Jun 28
0
[PATCH v2 2/3] vsock/virtio: stop workers during the .remove()
...st_lock); mutex_lock(&vsock->rx_lock); + + if (!vsock->rx_run) + goto out; + while (!list_empty(&pkts)) { struct virtio_vsock_pkt *pkt; @@ -102,6 +109,7 @@ static void virtio_transport_loopback_work(struct work_struct *work) virtio_transport_recv_pkt(pkt); } +out: mutex_unlock(&vsock->rx_lock); } @@ -130,6 +138,9 @@ virtio_transport_send_pkt_work(struct work_struct *work) mutex_lock(&vsock->tx_lock); + if (!vsock->tx_run) + goto out; + vq = vsock->vqs[VSOCK_VQ_TX]; for (;;) { @@ -188,6 +199,7 @@ virtio_transport_send_pkt_work(struct wo...
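The snippet above adds rx_run/tx_run checks under the workers' locks. As a sketch of that shutdown pattern in general terms (hypothetical names and user-space pthreads, not the vsock code): the worker tests a run flag under the same mutex it already holds for its work, and teardown clears the flag under that mutex before freeing anything, so a worker racing with .remove() bails out instead of touching freed state.

/* Hypothetical user-space sketch of the run-flag shutdown pattern. */
#include <pthread.h>
#include <stdbool.h>

struct worker_ctx {
        pthread_mutex_t lock;   /* corresponds to rx_lock/tx_lock */
        bool run;               /* corresponds to rx_run/tx_run */
        int pending;            /* stands in for the queued packets */
};

static void worker(struct worker_ctx *ctx)
{
        pthread_mutex_lock(&ctx->lock);
        if (!ctx->run)
                goto out;       /* device is being removed: do nothing */
        while (ctx->pending > 0)
                ctx->pending--; /* process the queued work */
out:
        pthread_mutex_unlock(&ctx->lock);
}

static void teardown(struct worker_ctx *ctx)
{
        pthread_mutex_lock(&ctx->lock);
        ctx->run = false;       /* later worker runs see the flag and exit early */
        pthread_mutex_unlock(&ctx->lock);
        /* only now is it safe to flush the queue and free the resources */
}

int main(void)
{
        struct worker_ctx ctx = { PTHREAD_MUTEX_INITIALIZER, true, 3 };

        worker(&ctx);           /* drains the pending work */
        teardown(&ctx);
        worker(&ctx);           /* bails out immediately: run is now false */
        return 0;
}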
2019 Jul 05
0
[PATCH v3 2/3] vsock/virtio: stop workers during the .remove()
...st_lock); mutex_lock(&vsock->rx_lock); + + if (!vsock->rx_run) + goto out; + while (!list_empty(&pkts)) { struct virtio_vsock_pkt *pkt; @@ -102,6 +109,7 @@ static void virtio_transport_loopback_work(struct work_struct *work) virtio_transport_recv_pkt(pkt); } +out: mutex_unlock(&vsock->rx_lock); } @@ -130,6 +138,9 @@ virtio_transport_send_pkt_work(struct work_struct *work) mutex_lock(&vsock->tx_lock); + if (!vsock->tx_run) + goto out; + vq = vsock->vqs[VSOCK_VQ_TX]; for (;;) { @@ -188,6 +199,7 @@ virtio_transport_send_pkt_work(struct wo...
2019 Sep 27
1
[PATCH v2 03/27] drm/dp_mst: Destroy MSTBs asynchronously
...ke_tx = false; > > > > > > - mutex_lock(&mgr->lock); > > > - list_for_each_entry_safe(port, tmp, &mstb->ports, next) { > > > - list_del(&port->next); > > > - drm_dp_mst_topology_put_port(port); > > > - } > > > - mutex_unlock(&mgr->lock); > > > - > > > - /* drop any tx slots msg */ > > > - mutex_lock(&mstb->mgr->qlock); > > > - if (mstb->tx_slots[0]) { > > > - mstb->tx_slots[0]->state = DRM_DP_SIDEBAND_TX_TIMEOUT; > > > - mstb->tx_slots...
2013 Feb 05
0
[PATCH v3] tcm_vhost: Multi-target support
...{ struct tcm_vhost_tport *tv_tport; struct tcm_vhost_tpg *tv_tpg; - int index; + bool match = false; + int index, ret; mutex_lock(&vs->dev.mutex); /* Verify that ring has been setup correctly. */ @@ -754,7 +783,6 @@ static int vhost_scsi_set_endpoint( return -EFAULT; } } - mutex_unlock(&vs->dev.mutex); mutex_lock(&tcm_vhost_mutex); list_for_each_entry(tv_tpg, &tcm_vhost_list, tv_tpg_list) { @@ -769,30 +797,33 @@ static int vhost_scsi_set_endpoint( } tv_tport = tv_tpg->tport; - if (!strcmp(tv_tport->tport_name, t->vhost_wwpn) && -...
2011 Nov 11
1
[RFC] kvm tools: Implement multiple VQ for virtio-net
...int tap_fd; char tap_name[IFNAMSIZ]; @@ -78,17 +76,22 @@ static void *virtio_net_rx_thread(void *p) struct net_dev *ndev = p; u16 out, in; u16 head; - int len; + int len, queue_num; + + mutex_lock(&ndev->mutex); + queue_num = ndev->rx_vq_num * 2; + ndev->tx_vq_num++; + mutex_unlock(&ndev->mutex); kvm = ndev->kvm; - vq = &ndev->vqs[VIRTIO_NET_RX_QUEUE]; + vq = &ndev->vqs[queue_num]; while (1) { - mutex_lock(&ndev->io_rx_lock); + mutex_lock(&ndev->io_lock[queue_num]); if (!virt_queue__available(vq)) - pthread_cond_wait(&...
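As a rough sketch of the multi-queue locking scheme in the RFC above (hypothetical names and user-space pthreads, not the kvm tools code): each worker thread claims its queue index under the device-wide mutex and then serializes only its own virtqueue with a per-queue lock, so rx/tx queues no longer contend on a single io lock.

/* Hypothetical user-space sketch of per-queue locking with queue indices
 * handed out under a device-wide mutex. */
#include <pthread.h>

#define MAX_QUEUES 8

struct net_dev {
        pthread_mutex_t mutex;                  /* protects the queue counters */
        int rx_vq_num;                          /* next rx queue index to hand out */
        pthread_mutex_t io_lock[MAX_QUEUES];    /* one lock per virtqueue */
        int vq_len[MAX_QUEUES];                 /* stands in for the vring contents */
};

static void *rx_thread(void *p)
{
        struct net_dev *ndev = p;
        int queue_num;

        pthread_mutex_lock(&ndev->mutex);       /* claim a queue index */
        queue_num = ndev->rx_vq_num * 2;        /* even slots rx, odd slots tx */
        ndev->rx_vq_num++;
        pthread_mutex_unlock(&ndev->mutex);

        pthread_mutex_lock(&ndev->io_lock[queue_num]);
        ndev->vq_len[queue_num] = 0;            /* process only this thread's queue */
        pthread_mutex_unlock(&ndev->io_lock[queue_num]);
        return NULL;
}

int main(void)
{
        static struct net_dev ndev = { .mutex = PTHREAD_MUTEX_INITIALIZER };
        pthread_t t[2];

        for (int i = 0; i < MAX_QUEUES; i++)
                pthread_mutex_init(&ndev.io_lock[i], NULL);
        for (int i = 0; i < 2; i++)
                pthread_create(&t[i], NULL, rx_thread, &ndev);
        for (int i = 0; i < 2; i++)
                pthread_join(t[i], NULL);
        return 0;
}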