Displaying 20 results from an estimated 4000 matches similar to: "Does ZFS use large memory pages?"
2012 Nov 03
2
Replacing NAs in long format
Hi,
I have the following data:
> data[1:20,c(1,2,20)]
 idr schyear year
   1       8    0
   1       9    1
   1      10   NA
   2       4   NA
   2       5   -1
   2       6    0
   2       7    1
   2       8    2
   2       9    3
   2      10    4
   2      11   NA
   2      12    6
   3       4   NA
   3       5   -2
   3       6   -1
   3       7    0
   3       8    1
   3       9    2
   3      10    3
   3      11   NA
What I want to do is
2017 Nov 30
2
[PATCH v18 01/10] idr: add #include <linux/bug.h>
On Wed, Nov 29, 2017 at 09:55:17PM +0800, Wei Wang wrote:
> The <linux/bug.h> was removed from radix-tree.h by the following commit:
> f5bba9d11a256ad2a1c2f8e7fc6aabe6416b7890.
>
> Since that commit, tools/testing/radix-tree/ couldn't pass compilation
> due to: tools/testing/radix-tree/idr.c:17: undefined reference to
> WARN_ON_ONCE. This patch adds the bug.h header to
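A minimal sketch of the shape of the fix, for context: once radix-tree.h no longer pulls in <linux/bug.h>, any file that uses WARN_ON_ONCE() has to include it explicitly. The exact file being patched is truncated above, so the placement here is only illustrative.

/* Include bug.h explicitly for WARN_ON_ONCE() instead of relying on
 * radix-tree.h to drag it in; which file gets this line is an assumption. */
#include <linux/bug.h>
#include <linux/radix-tree.h>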
2020 Feb 18
2
[PATCH] vhost: introduce vDPA based backend
On Fri, Jan 31, 2020 at 11:36:51AM +0800, Tiwei Bie wrote:
> +static int vhost_vdpa_alloc_minor(struct vhost_vdpa *v)
> +{
> + return idr_alloc(&vhost_vdpa.idr, v, 0, MINORMASK + 1,
> + GFP_KERNEL);
> +}
Please don't use idr in new code, use xarray directly
> +static int vhost_vdpa_probe(struct device *dev)
> +{
> + struct vdpa_device *vdpa = dev_to_vdpa(dev);
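For context on the review comment above, here is a minimal sketch of what the same minor allocation could look like on top of an XArray rather than an IDR. The xarray and the helper name are hypothetical; only struct vhost_vdpa and the MINORMASK range come from the quoted patch.

#include <linux/kdev_t.h>
#include <linux/xarray.h>

struct vhost_vdpa;                              /* from the quoted patch */

static DEFINE_XARRAY_ALLOC(vhost_vdpa_xa);      /* hypothetical stand-in for vhost_vdpa.idr */

static int vhost_vdpa_alloc_minor_xa(struct vhost_vdpa *v)
{
	u32 minor;
	int err;

	/* xa_alloc() stores v at a free index in [0, MINORMASK] and
	 * hands that index back through &minor. */
	err = xa_alloc(&vhost_vdpa_xa, &minor, v,
		       XA_LIMIT(0, MINORMASK), GFP_KERNEL);
	return err ? err : minor;
}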
2013 Aug 20
5
[PATCH-v3 1/4] idr: Percpu ida
On Fri, 16 Aug 2013 23:09:06 +0000 "Nicholas A. Bellinger" <nab at linux-iscsi.org> wrote:
> From: Kent Overstreet <kmo at daterainc.com>
>
> Percpu frontend for allocating ids. With percpu allocation (that works),
> it's impossible to guarantee it will always be possible to allocate all
> nr_tags - typically, some will be stuck on a remote percpu
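The "stuck on a remote percpu" point is easier to see with a toy model. The names below (tag_pool, tag_cache, tag_alloc) are made up for illustration and are not the percpu ida API from the patch: each CPU keeps a small private cache of free tags and only falls back to a shared pool, so free tags resting in another CPU's cache cannot be handed out here.

#include <linux/percpu.h>
#include <linux/spinlock.h>

#define TAG_CACHE_SIZE	16

struct tag_cache {			/* per-CPU stash of free tags */
	unsigned int nr;
	unsigned int tags[TAG_CACHE_SIZE];
};

struct tag_pool {			/* shared fallback pool */
	spinlock_t lock;
	unsigned int nr_free;
	unsigned int *freelist;
	struct tag_cache __percpu *cache;
};

static int tag_alloc(struct tag_pool *pool)
{
	struct tag_cache *tc = get_cpu_ptr(pool->cache);	/* pin this CPU */
	int tag = -1;

	if (tc->nr) {
		tag = tc->tags[--tc->nr];	/* fast path: local cache */
	} else {
		spin_lock(&pool->lock);		/* slow path: shared pool */
		if (pool->nr_free)
			tag = pool->freelist[--pool->nr_free];
		spin_unlock(&pool->lock);
	}
	put_cpu_ptr(pool->cache);

	/* -1 does not mean all nr_tags are in use: some free tags may
	 * simply be sitting in another CPU's cache right now. */
	return tag;
}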
2017 Dec 22
2
[PATCH v20 3/7 RESEND] xbitmap: add more operations
On 12/22/2017 05:03 AM, Matthew Wilcox wrote:
> OK, here's a rewrite of xbitmap.
>
> Compared to the version you sent:
> - xb_find_set() is the rewrite I sent out yesterday.
> - xb_find_clear() is a new implementation. I use the IDR_FREE tag to find
> clear bits. This led to me finding a bug in radix_tree_for_each_tagged().
> - xb_zero() is also a new
2007 Apr 19
3
Using dtrace to snoop messages between two Streams modules
I'm working on a case where a customer has a 3rd-party STREAMS
driver/module, called uplink, which sits over Sun's ce driver. This 3rd-party
module is used by the telco to perform telco-grade NIC failover.
The customer was given an IDR ce driver to avoid a panic they were
seeing. The IDR driver was successful in avoiding the panic, but now the
customer is getting many
2013 Aug 16
6
[PATCH-v3 0/4] target/vhost-scsi: Add per-cpu ida tag pre-allocation for v3.12
From: Nicholas Bellinger <nab at linux-iscsi.org>
Hi folks,
This is an updated series for adding tag pre-allocation support of
target fabric descriptor memory, utilizing Kent's latest per-cpu ida
bits here, along with Christoph Lameter's latest comments:
[PATCH 04/10] idr: Percpu ida
http://marc.info/?l=linux-kernel&m=137160026006974&w=2
The first patch is a
2018 May 16
2
[RFC v4 3/5] virtio_ring: add packed ring support
On 2018-05-16 20:39, Tiwei Bie wrote:
> On Wed, May 16, 2018 at 07:50:16PM +0800, Jason Wang wrote:
>> On 2018-05-16 16:37, Tiwei Bie wrote:
> [...]
>>> struct vring_virtqueue {
>>> @@ -116,6 +117,9 @@ struct vring_virtqueue {
>>> /* Last written value to driver->flags in
>>> * guest byte order. */
>>> u16
2023 Feb 14
3
[PATCH] drm/gem: Expose the buffer object handle to userspace last
From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
Currently drm_gem_handle_create_tail exposes the handle to userspace
before the buffer object construction is complete. This allows userspace to
work against a partially constructed object, which may also be in the
process of having its creation fail, and that can have a range of negative
outcomes.
A lot of those will depend on what the individual
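The general shape of the fix being argued for, reserve the handle first and publish the object only once construction has fully succeeded, can be sketched with the plain IDR API. The names below are illustrative rather than the actual drm_gem_handle_create_tail() code, and the sketch assumes a kernel where idr_alloc() accepts a NULL pointer to reserve an ID.

#include <linux/idr.h>

struct my_obj;				/* stand-in for the GEM object */

static DEFINE_IDR(handle_idr);		/* hypothetical per-file handle table */

static int publish_handle_last(struct my_obj *obj)
{
	int handle;

	/* Reserve an ID but leave it pointing at NULL, so a concurrent
	 * lookup cannot reach the half-constructed object yet. */
	handle = idr_alloc(&handle_idr, NULL, 1, 0, GFP_KERNEL);
	if (handle < 0)
		return handle;

	/* ... finish constructing obj; on failure, idr_remove() and bail ... */

	/* Only now make the object visible under its handle. */
	idr_replace(&handle_idr, obj, handle);
	return handle;
}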
2015 Sep 17
2
DEFINE_IDA causing memory leaks? (was Re: [PATCH 1/2] virtio: fix memory leak of virtio ida cache layers)
On Wed, Sep 16, 2015 at 07:29:17PM -0500, Suman Anna wrote:
> The virtio core uses a static ida named virtio_index_ida for
> assigning index numbers to virtio devices during registration.
> The ida core may allocate some internal idr cache layers and
> an ida bitmap upon any ida allocation, and all these layers are
> truly freed only upon the ida destruction. The virtio_index_ida
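A rough illustration of the lifetime issue being described, using made-up names rather than the virtio code itself: ida_simple_remove() gives the id back, but the internal layers the ida allocated on first use are only released by ida_destroy().

#include <linux/idr.h>

static DEFINE_IDA(example_ida);		/* stand-in for virtio_index_ida */

static int example_get_index(void)
{
	/* The first allocation may also allocate internal idr cache
	 * layers and an ida bitmap inside the ida. */
	return ida_simple_get(&example_ida, 0, 0, GFP_KERNEL);
}

static void example_put_index(int index)
{
	/* Returns the id, but the cached internal layers stay allocated. */
	ida_simple_remove(&example_ida, index);
}

static void example_cleanup(void)
{
	/* Only this releases whatever the ida still holds internally. */
	ida_destroy(&example_ida);
}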
2013 Aug 28
2
[PATCH] percpu ida: Switch to cpumask_t, add some comments
On Wed, 28 Aug 2013 12:55:17 -0700 Kent Overstreet <kmo at daterainc.com> wrote:
> Fixup patch, addressing Andrew's review feedback:
Looks reasonable.
> lib/idr.c | 38 +++++++++++++++++++++-----------------
I still don't think it should be in this file.
You say that some as-yet-unmerged patches will tie the new code into
the old ida code. But will it do it in a
2020 Sep 07
1
[PATCH v2 1/2] drm: allow limiting the scatter list size.
> > + /**
> > + * @max_segment:
> > + *
> > + * Max size for scatter list segments. When unset the default
> > + * (SCATTERLIST_MAX_SEGMENT) is used.
> > + */
> > + size_t max_segment;
>
> Is there no better place for this than "at the bottom"? drm_device is a
> huge structure, piling stuff up randomly doesn't make it better
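A small sketch of how a caller might consume the quoted field, assuming the drm_device::max_segment member added by the patch under review; the helper name is made up, and SCATTERLIST_MAX_SEGMENT is the scatterlist default the kerneldoc refers to.

#include <linux/scatterlist.h>
#include <drm/drm_device.h>

/* Hypothetical helper: honour dev->max_segment when a driver set it,
 * otherwise fall back to the scatterlist default. */
static size_t drm_sg_max_segment(struct drm_device *dev)
{
	return dev->max_segment ?: SCATTERLIST_MAX_SEGMENT;
}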