search for: range_ends

Displaying 18 results from an estimated 25 matches for "range_ends".

2008 Jul 09
1
memory leak in sub("[range]",...)
...by 0x8160DA7: do_begin (eval.c:1174)
==32503==    by 0x815F0EB: Rf_eval (eval.c:461)
==32503==    by 0x8162210: Rf_applyClosure (eval.c:667)

The leaked blocks are allocated in internal_function build_range_exp() at

5200          /* Use realloc since mbcset->range_starts and mbcset->range_ends
5201             are NULL if *range_alloc == 0. */
5202          new_array_start = re_realloc (mbcset->range_starts, wchar_t,
5203                                        new_nranges);
5204          new_array_end = re_realloc (mbcset->range_ends, wchar_t,
5205...
2008 Aug 07
1
memory leak in sub("[range]", ...) when #ifndef _LIBC (PR#11946)
...tes in 0 blocks.
==28643== still reachable: 12,599,585 bytes in 5,915 blocks.
==28643== suppressed: 0 bytes in 0 blocks.
==28643== Reachable blocks (those to which a pointer was found) are not shown.
==28643== To see them, rerun with: --show-reachable=yes

The flagged memory block is the range_ends component of mbcset. I think that range_starts was also being leaked, but valgrind was combining the two.

It looks like the cpp macro _LIBC is not defined when I compile R on this Linux box. regex.c defines range_ends and range_starts as different types, depending on the value of _LIBC, and it a...
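The leak pattern the report describes can be shown in a minimal, self-contained C sketch. The struct and function names below are simplified stand-ins (only range_starts and range_ends come from the report; the real code is build_range_exp() in the regex.c that R bundles): the range arrays are grown with realloc(), so whichever path tears the charset down must free both arrays, and the non-_LIBC build apparently did not.

    #include <stdlib.h>
    #include <wchar.h>

    /* Simplified stand-in for the mbcset structure named in the report;
       only the two leaked fields are kept. */
    typedef struct {
        wchar_t *range_starts;
        wchar_t *range_ends;
        size_t nranges;
    } charset_t;

    /* The allocation pattern quoted above: realloc() is safe because both
       pointers are NULL until the first range is stored. */
    static int add_range(charset_t *cs, wchar_t start, wchar_t end)
    {
        wchar_t *p;

        p = realloc(cs->range_starts, (cs->nranges + 1) * sizeof *p);
        if (p == NULL)
            return -1;
        cs->range_starts = p;

        p = realloc(cs->range_ends, (cs->nranges + 1) * sizeof *p);
        if (p == NULL)
            return -1;
        cs->range_ends = p;

        cs->range_starts[cs->nranges] = start;
        cs->range_ends[cs->nranges] = end;
        cs->nranges++;
        return 0;
    }

    /* The fix the report implies: the teardown path must free both
       arrays, or every compiled "[a-z]"-style range leaks them. */
    static void free_charset(charset_t *cs)
    {
        free(cs->range_starts);
        free(cs->range_ends);
        cs->range_starts = cs->range_ends = NULL;
        cs->nranges = 0;
    }

    int main(void)
    {
        charset_t cs = {0};
        if (add_range(&cs, L'a', L'z') == 0)
            free_charset(&cs);   /* valgrind reports no leaks with this */
        return 0;
    }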
2006 Mar 30
2
Functional test confusion
I have been reading about testing in RoR for what seems like hours now in search of the answer to what is probably a simple problem. Where is the best place to test the second action explained below? I have a controller (NetworkingController) with a method (create_network_segment) that makes use of two models (NetworkSegment and NetworkIpaddress). This particular method performs two actions: 1.
2008 Aug 07
0
memory leak in sub("[range]", ...) when #ifndef _LIBC (PR#12488)
...== still reachable: 12,599,585 bytes in 5,915 blocks.
> ==28643== suppressed: 0 bytes in 0 blocks.
> ==28643== Reachable blocks (those to which a pointer was found) are not shown.
> ==28643== To see them, rerun with: --show-reachable=yes
>
> The flagged memory block is the range_ends component of mbcset.
> I think that range_starts was also being leaked, but valgrind was
> combining the two.
>
> It looks like the cpp macro _LIBC is not defined when I compile
> R on this Linux box. regex.c defines range_ends and range_starts
> as different types, depending on...
2019 Jul 24
2
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed, Jul 24, 2019 at 07:58:58PM +0200, Michal Hocko wrote:
> On Wed 24-07-19 12:28:58, Jason Gunthorpe wrote:
> > On Wed, Jul 24, 2019 at 09:05:53AM +0200, Christoph Hellwig wrote:
> > > Looks good:
> > >
> > > Reviewed-by: Christoph Hellwig <hch at lst.de>
> > >
> > > One comment on a related cleanup:
2019 Jul 24
5
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed, Jul 24, 2019 at 09:05:53AM +0200, Christoph Hellwig wrote:
> Looks good:
>
> Reviewed-by: Christoph Hellwig <hch at lst.de>
>
> One comment on a related cleanup:
>
> > list_for_each_entry(mirror, &hmm->mirrors, list) {
> > 	int rc;
> >
> > -	rc = mirror->ops->sync_cpu_device_pagetables(mirror, &update);
> > +
2019 Jul 24
2
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed 24-07-19 20:56:17, Michal Hocko wrote:
> On Wed 24-07-19 15:08:37, Jason Gunthorpe wrote:
> > On Wed, Jul 24, 2019 at 07:58:58PM +0200, Michal Hocko wrote:
> [...]
> > > Maybe new users have started relying on a new semantic in the meantime;
> > > back then, none of the notifiers had even started any action in blocking
> > > mode on an EAGAIN bailout.
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
>
> On 2019/3/8 ??3:16, Andrea Arcangeli wrote:
> > On Thu, Mar 07, 2019 at 12:56:45PM -0500, Michael S. Tsirkin wrote:
> > > On Thu, Mar 07, 2019 at 10:47:22AM -0500, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote:
> > > > > +static const
2019 Jul 23
4
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
The hmm_mirror_ops callback function sync_cpu_device_pagetables() passes a struct hmm_update, which is a simplified version of struct mmu_notifier_range. This is unnecessary, so replace hmm_update with mmu_notifier_range directly.

Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
Cc: "Jérôme Glisse" <jglisse at redhat.com>
Cc: Jason Gunthorpe <jgg at mellanox.com>
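A rough sketch of the interface change the patch describes, in kernel-style C. The struct hmm_mirror and hmm_mirror_ops types belong to the HMM mirror API of that kernel generation; the driver name and the callback body here are hypothetical, and the "before" signature is reconstructed from the patch description, not copied from it:

    #include <linux/hmm.h>
    #include <linux/mmu_notifier.h>

    /* Before the patch, drivers received a driver-visible duplicate:
     *
     *   int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
     *                                     const struct hmm_update *update);
     *
     * After it, the mmu_notifier_range is passed through unmodified. */
    static int example_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
                                    const struct mmu_notifier_range *range)
    {
            /* Invalidate device page tables covering [range->start,
             * range->end); a real driver would also check
             * mmu_notifier_range_blockable(range) before sleeping. */
            return 0;
    }

    static const struct hmm_mirror_ops example_mirror_ops = {
            .sync_cpu_device_pagetables = example_sync_cpu_device_pagetables,
    };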
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed 24-07-19 12:28:58, Jason Gunthorpe wrote:
> On Wed, Jul 24, 2019 at 09:05:53AM +0200, Christoph Hellwig wrote:
> > Looks good:
> >
> > Reviewed-by: Christoph Hellwig <hch at lst.de>
> >
> > One comment on a related cleanup:
> >
> > > list_for_each_entry(mirror, &hmm->mirrors, list) {
> > > 	int rc;
> > >
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed 24-07-19 15:08:37, Jason Gunthorpe wrote:
> On Wed, Jul 24, 2019 at 07:58:58PM +0200, Michal Hocko wrote:
[...]
> > Maybe new users have started relying on a new semantic in the meantime;
> > back then, none of the notifiers had even started any action in blocking
> > mode on an EAGAIN bailout. Most of them simply did trylock early in the
> > process and bailed out
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed, Jul 24, 2019 at 08:59:10PM +0200, Michal Hocko wrote:
> On Wed 24-07-19 20:56:17, Michal Hocko wrote:
> > On Wed 24-07-19 15:08:37, Jason Gunthorpe wrote:
> > > On Wed, Jul 24, 2019 at 07:58:58PM +0200, Michal Hocko wrote:
> > [...]
> > > > Maybe new users have started relying on a new semantic in the meantime;
> > > > back then, none of the
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/8 ??10:58, Jerome Glisse wrote:
> On Fri, Mar 08, 2019 at 04:50:36PM +0800, Jason Wang wrote:
>> On 2019/3/8 ??3:16, Andrea Arcangeli wrote:
>>> On Thu, Mar 07, 2019 at 12:56:45PM -0500, Michael S. Tsirkin wrote:
>>>> On Thu, Mar 07, 2019 at 10:47:22AM -0500, Michael S. Tsirkin wrote:
>>>>> On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang
2023 Mar 06
0
[PATCH drm-next v2 05/16] drm: manager to keep track of GPUs VA mappings
On 2/28/23 17:24, Liam R. Howlett wrote:
> * Danilo Krummrich <dakr at redhat.com> [230227 21:17]:
>> On Tue, Feb 21, 2023 at 01:20:50PM -0500, Liam R. Howlett wrote:
>>> * Danilo Krummrich <dakr at redhat.com> [230217 08:45]:
>>>> Add infrastructure to keep track of GPU virtual address (VA) mappings
>>>> with a dedicated VA space manager
2020 Sep 03
1
[PATCH v3] mm/thp: fix __split_huge_pmd_locked() for migration PMD
A migrating transparent huge page has to already be unmapped. Otherwise, the page could be modified while it is being copied to a new page and data could be lost. The function __split_huge_pmd() checks for a PMD migration entry before calling __split_huge_pmd_locked() leading one to think that __split_huge_pmd_locked() can handle splitting a migrating PMD. However, the code always increments the
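The ordering the fix depends on can be sketched briefly. is_pmd_migration_entry() and pmd_trans_huge() are real kernel helpers; the surrounding function is a hypothetical stand-in for the mm/huge_memory.c logic, not the actual patch:

    #include <linux/huge_mm.h>
    #include <linux/swapops.h>

    /* Sketch only: a migration PMD is a swap-style entry, so it is not
     * pmd_trans_huge(), and the page behind it is already unmapped. */
    static void example_split_locked(struct vm_area_struct *vma, pmd_t *pmd,
                                     unsigned long addr)
    {
            if (is_pmd_migration_entry(*pmd)) {
                    /* Already unmapped: convert the huge migration entry
                     * into per-page migration PTEs without bumping map
                     * counts as though the page were still mapped. */
            } else if (pmd_trans_huge(*pmd)) {
                    /* Present, mapped THP: the normal split path, which
                     * does adjust mapcounts and refcounts, applies. */
            }
    }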
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/8 ??3:16, Andrea Arcangeli wrote:
> On Thu, Mar 07, 2019 at 12:56:45PM -0500, Michael S. Tsirkin wrote:
>> On Thu, Mar 07, 2019 at 10:47:22AM -0500, Michael S. Tsirkin wrote:
>>> On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote:
>>>> +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
>>>> +	.invalidate_range = vhost_invalidate_range,
>>>> +};
>>>> +
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 12:56:45PM -0500, Michael S. Tsirkin wrote:
> On Thu, Mar 07, 2019 at 10:47:22AM -0500, Michael S. Tsirkin wrote:
> > On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote:
> > > +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
> > > +	.invalidate_range = vhost_invalidate_range,
> > > +};
> > > +
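The quoted diff registers an mmu_notifier ops table whose only hook is .invalidate_range. A minimal sketch of that pattern follows, against the mmu_notifier API of that kernel generation; only the two identifiers come from the diff, the callback body and the registration note are hypothetical:

    #include <linux/mmu_notifier.h>

    /* invalidate_range() is called while page table locks are held in
     * some paths, so it must not sleep; that constraint is what the
     * thread is debating. The body is a placeholder for dropping stale
     * kernel mappings of guest memory. */
    static void vhost_invalidate_range(struct mmu_notifier *mn,
                                       struct mm_struct *mm,
                                       unsigned long start, unsigned long end)
    {
            /* Invalidate any cached translation overlapping [start, end). */
    }

    static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
            .invalidate_range = vhost_invalidate_range,
    };

    /* Registration against the owning process's mm would look like
     * (error handling omitted):
     *
     *   mn->ops = &vhost_mmu_notifier_ops;
     *   mmu_notifier_register(mn, current->mm);
     */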
2012 Jun 26
6
[PATCH] Add a page cache-backed balloon device driver.
This implementation of a virtio balloon driver uses the page cache to "store" pages that have been released to the host. The communication (outside of target counts) is one way--the guest notifies the host when it adds a page to the page cache, allowing the host to madvise(2) with MADV_DONTNEED. Reclaim in the guest is therefore automatic and implicit (via the regular page reclaim).
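The host-side mechanism the description leans on, madvise(2) with MADV_DONTNEED, can be demonstrated in ordinary userspace C, independent of the driver: the mapping stays valid, but the kernel may reclaim the backing page immediately, and a private anonymous page reads back as zeroes afterwards.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            long psz = sysconf(_SC_PAGESIZE);

            /* Stand-in for a guest page backed by host anonymous memory. */
            char *p = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }
            memset(p, 0xAA, psz);

            /* What the host does when the guest balloons the page: the
             * contents are declared disposable and can be reclaimed now. */
            if (madvise(p, psz, MADV_DONTNEED) != 0) {
                    perror("madvise");
                    return 1;
            }

            /* A private anonymous page is zero-filled on the next touch. */
            printf("first byte after MADV_DONTNEED: 0x%02x\n",
                   (unsigned char)p[0]);
            munmap(p, psz);
            return 0;
    }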