search for: 1634,8

Displaying 14 results from an estimated 14 matches for "1634,8".

2020 Aug 13
2
Accumulating CPU load from Xorg process with DRI3
...urs of uptime X already eating some CPU: op - 01:30:49 up 2:45, 1 user, load average: 1,12, 0,93, 0,84 Tasks: 210 total, 1 running, 209 sleeping, 0 stopped, 0 zombie %Cpu(s): 12,1 us, 3,9 sy, 0,0 ni, 81,7 id, 0,7 wa, 0,0 hi, 1,6 si, 0,0 st MiB Mem : 11875,3 total, 6416,4 free, 1634,8 used, 3824,1 buff/cache MiB Swap: 1145,0 total, 1145,0 free, 0,0 used. 9969,7 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1198 root 20 0 146160 78828 28160 S 35,8 0,6 30:41.37 Xorg 1285 guest 20 0 59776 17332 13756...
2014 Oct 14
4
[PATCH RFC] virtio_net: enable tx interrupt
...et_free_queues(struct virtnet_info *vi) { int i; - for (i = 0; i < vi->max_queue_pairs; i++) + for (i = 0; i < vi->max_queue_pairs; i++) { netif_napi_del(&vi->rq[i].napi); + netif_napi_del(&vi->sq[i].napi); + } kfree(vi->rq); kfree(vi->sq); @@ -1593,6 +1634,8 @@ static int virtnet_alloc_queues(struct virtnet_info *vi) netif_napi_add(vi->dev, &vi->rq[i].napi, virtnet_poll, napi_weight); napi_hash_add(&vi->rq[i].napi); + netif_napi_add(vi->dev, &vi->sq[i].napi, virtnet_poll_tx, + napi_weight); s...
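The diff above pairs every new tx-queue NAPI registration with a matching deletion in the free path. A minimal userspace sketch of that symmetric init/teardown pattern, with all names illustrative stand-ins rather than the kernel API:

```c
/* Sketch of the symmetric per-queue init/teardown pattern the patch
 * applies: once a NAPI context is added for every tx queue (sq) as well
 * as every rx queue (rq), the free path must delete both, or the tx
 * contexts leak.  Names are illustrative, not the kernel API. */
#include <assert.h>

#define MAX_QUEUE_PAIRS 4

static int live_napi;                 /* count of registered NAPI contexts */

static void napi_add(void) { live_napi++; }
static void napi_del(void) { live_napi--; }

/* Mirrors virtnet_alloc_queues(): one NAPI per rq and, after the patch,
 * one per sq as well. */
static void alloc_queues(void)
{
    for (int i = 0; i < MAX_QUEUE_PAIRS; i++) {
        napi_add();                   /* rq[i].napi */
        napi_add();                   /* sq[i].napi, new in the patch */
    }
}

/* Mirrors the fixed virtnet_free_queues(): both contexts are deleted. */
static void free_queues(void)
{
    for (int i = 0; i < MAX_QUEUE_PAIRS; i++) {
        napi_del();                   /* rq[i].napi */
        napi_del();                   /* sq[i].napi */
    }
}

/* Returns 1 when teardown exactly undoes setup. */
int queues_balanced(void)
{
    alloc_queues();
    free_queues();
    return live_napi == 0;
}
```

Without the added braces and second `netif_napi_del()` call in the real patch, the free loop would only undo half of what the alloc path registered.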
2020 Aug 13
0
Accumulating CPU load from Xorg process with DRI3
...ng some CPU: > > > op - 01:30:49 up 2:45, 1 user, load average: 1,12, 0,93, 0,84 > Tasks: 210 total, 1 running, 209 sleeping, 0 stopped, 0 zombie > %Cpu(s): 12,1 us, 3,9 sy, 0,0 ni, 81,7 id, 0,7 wa, 0,0 hi, 1,6 si, 0,0 st > MiB Mem : 11875,3 total, 6416,4 free, 1634,8 used, 3824,1 buff/cache > MiB Swap: 1145,0 total, 1145,0 free, 0,0 used. 9969,7 avail Mem > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > 1198 root 20 0 146160 78828 28160 S 35,8 0,6 30:41.37 Xorg > 1285 guest 20...
2014 Oct 15
0
[PATCH RFC] virtio_net: enable tx interrupt
...; > > - for (i = 0; i < vi->max_queue_pairs; i++) > + for (i = 0; i < vi->max_queue_pairs; i++) { > netif_napi_del(&vi->rq[i].napi); > + netif_napi_del(&vi->sq[i].napi); > + } > > kfree(vi->rq); > kfree(vi->sq); > @@ -1593,6 +1634,8 @@ static int virtnet_alloc_queues(struct virtnet_info *vi) > netif_napi_add(vi->dev, &vi->rq[i].napi, virtnet_poll, > napi_weight); > napi_hash_add(&vi->rq[i].napi); > + netif_napi_add(vi->dev, &vi->sq[i].napi, virtnet_poll_tx, > +...
2020 Aug 16
1
Accumulating CPU load from Xorg process with DRI3
...load average: 1,12, 0,93, 0,84 > > > > > Tasks: 210 total, 1 running, 209 sleeping, 0 stopped, 0 zombie > > > > > %Cpu(s): 12,1 us, 3,9 sy, 0,0 ni, 81,7 id, 0,7 wa, 0,0 hi, 1,6 si, 0,0 st > > > > > MiB Mem : 11875,3 total, 6416,4 free, 1634,8 used, 3824,1 buff/cache > > > > > MiB Swap: 1145,0 total, 1145,0 free, 0,0 used. 9969,7 avail Mem > > > > > > > > > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > > > > > 1198 root 20...
2019 Nov 22
0
[RFC 13/13] iommu/virtio: Add topology description to
...c +++ b/drivers/pci/pci-driver.c @@ -17,6 +17,7 @@ #include <linux/suspend.h> #include <linux/kexec.h> #include <linux/of_device.h> +#include <linux/virtio_iommu.h> #include <linux/acpi.h> #include "pci.h" #include "pcie/portdrv.h" @@ -1633,6 +1634,8 @@ static int pci_dma_configure(struct device *dev) struct acpi_device *adev = to_acpi_device_node(bridge->fwnode); ret = acpi_dma_configure(dev, acpi_get_dma_attr(adev)); + } else if (IS_ENABLED(CONFIG_VIRTIO_IOMMU_TOPOLOGY)) { + ret = virt_dma_configure(dev); } pci_put_host_br...
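The hunk above adds a compile-time-guarded fallback branch to `pci_dma_configure()`: firmware descriptions (DT, then ACPI) are tried first, and only when the virtio-iommu topology option is built in does the virtio description apply. A self-contained sketch of that dispatch shape, where the `CONFIG_` macro, `IS_ENABLED` stand-in, and helper names are illustrative assumptions rather than the kernel's definitions:

```c
/* Sketch of the fallback dispatch the patch gives pci_dma_configure():
 * use the firmware topology description when one exists, otherwise fall
 * back to the virtio-iommu description if it was compiled in.  All
 * names below are stand-ins, not the real kernel interfaces. */
#define CONFIG_VIRTIO_IOMMU_TOPOLOGY 1
#define IS_ENABLED(option) (option)

enum fw_kind { FW_NONE, FW_OF, FW_ACPI };

static int of_dma_configure(void)   { return 0; }  /* devicetree path */
static int acpi_dma_configure(void) { return 0; }  /* ACPI path */
static int virt_dma_configure(void) { return 0; }  /* virtio-iommu path */

/* Returns 0 on success, -1 when no usable topology description exists. */
int dma_configure(enum fw_kind fw)
{
    if (fw == FW_OF)
        return of_dma_configure();
    if (fw == FW_ACPI)
        return acpi_dma_configure();
    if (IS_ENABLED(CONFIG_VIRTIO_IOMMU_TOPOLOGY))
        return virt_dma_configure();  /* the new else-if branch */
    return -1;
}
```

The `IS_ENABLED()` guard means the branch compiles away entirely on kernels built without the option, which is why the patch can add it unconditionally to a hot shared path.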
2019 Nov 22
1
[RFC 13/13] iommu/virtio: Add topology description to
...6 +17,7 @@ > #include <linux/suspend.h> > #include <linux/kexec.h> > #include <linux/of_device.h> > +#include <linux/virtio_iommu.h> > #include <linux/acpi.h> > #include "pci.h" > #include "pcie/portdrv.h" > @@ -1633,6 +1634,8 @@ static int pci_dma_configure(struct device *dev) > struct acpi_device *adev = to_acpi_device_node(bridge->fwnode); > > ret = acpi_dma_configure(dev, acpi_get_dma_attr(adev)); > + } else if (IS_ENABLED(CONFIG_VIRTIO_IOMMU_TOPOLOGY)) { > + ret = virt_dma_configure(dev)...
2019 Nov 22
16
[RFC 00/13] virtio-iommu on non-devicetree platforms
I'm seeking feedback on multi-platform support for virtio-iommu. At the moment only devicetree (DT) is supported and we don't have a pleasant solution for other platforms. Once we figure out the topology description, x86 support is trivial. Since the IOMMU manages memory accesses from other devices, the guest kernel needs to initialize the IOMMU before endpoints start issuing DMA.
2007 Jan 31
0
Branch 'interpreter' - 20 commits - autogen.sh configure.ac libswfdec/js libswfdec/swfdec_debug.h libswfdec/swfdec_js.c libswfdec/swfdec_js_color.c libswfdec/swfdec_js_movie.c libswfdec/swfdec_movie.c libswfdec/swfdec_movie.h libswfdec/swfdec_script.c
...ecBi gconstpointer bytecode; bytecode = bits->ptr; - while ((action = swfdec_bits_get_u8 (bits))) { + while (swfdec_bits_left (bits) && (action = swfdec_bits_get_u8 (bits))) { if (action & 0x80) { len = swfdec_bits_get_u16 (bits); data = bits->ptr; @@ -1634,8 +1755,10 @@ swfdec_script_interpret (SwfdecScript *s while (TRUE) { /* check pc */ + if (pc == endpc) /* needed for scripts created via DefineFunction */ + break; if (pc < startpc || pc >= endpc) { - SWFDEC_ERROR ("pc %p not in valid range [%p, %p] anymore&q...
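The hunk above distinguishes two exit conditions in the interpreter loop: `pc == endpc` is normal termination (scripts created via DefineFunction end exactly at `endpc`), while a `pc` outside `[startpc, endpc)` is an error. A self-contained sketch of that bounds-checked loop, with a made-up one-byte opcode format (0 halts) in place of the real SWF bytecode:

```c
/* Sketch of the bounds check the swfdec patch adds: treat pc == endpc
 * as normal termination, and any pc outside [startpc, endpc) as a
 * corrupt jump target.  The bytecode format here is invented: each
 * opcode is one byte and 0 means end-of-script. */
#include <stddef.h>
#include <stdint.h>

/* Returns the number of actions executed, or -1 if pc left the range. */
int interpret(const uint8_t *code, size_t len)
{
    const uint8_t *startpc = code;
    const uint8_t *endpc = code + len;
    const uint8_t *pc = startpc;
    int executed = 0;

    while (1) {
        /* check pc */
        if (pc == endpc)                 /* ran exactly off the end: stop */
            break;
        if (pc < startpc || pc >= endpc) /* corrupt jump target: error */
            return -1;
        uint8_t action = *pc++;
        if (action == 0)                 /* explicit end-of-script opcode */
            break;
        executed++;                      /* a real VM would dispatch here */
    }
    return executed;
}
```

Before the patch, a script that simply ran to its last byte tripped the out-of-range error path; the added `pc == endpc` check lets it fall out of the loop cleanly instead.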
2016 Mar 21
22
[PATCH v2 00/18] Support non-lru page migration
Recently, I got many reports about performance degradation on embedded systems (Android mobile phones, webOS TVs and so on), and about fork failing easily. The problem was fragmentation caused by zram and GPU driver pages. Those pages cannot be migrated, so compaction cannot work well either, and the reclaimer ends up shrinking all working-set pages. It made the system very slow and even caused fork to fail.
2007 Feb 06
0
109 commits - configure.ac libswfdec/js libswfdec/Makefile.am libswfdec/swfdec_bits.c libswfdec/swfdec_bits.h libswfdec/swfdec_buffer.c libswfdec/swfdec_button_movie.c libswfdec/swfdec_codec_screen.c libswfdec/swfdec_color.c libswfdec/swfdec_color.h
...ecBi gconstpointer bytecode; bytecode = bits->ptr; - while ((action = swfdec_bits_get_u8 (bits))) { + while (swfdec_bits_left (bits) && (action = swfdec_bits_get_u8 (bits))) { if (action & 0x80) { len = swfdec_bits_get_u16 (bits); data = bits->ptr; @@ -1634,8 +1755,10 @@ swfdec_script_interpret (SwfdecScript *s while (TRUE) { /* check pc */ + if (pc == endpc) /* needed for scripts created via DefineFunction */ + break; if (pc < startpc || pc >= endpc) { - SWFDEC_ERROR ("pc %p not in valid range [%p, %p] anymore&q...