Displaying 20 results from an estimated 25 matches for "range_end".
2008 Jul 09
1
memory leak in sub("[range]",...)
...by 0x8160DA7: do_begin (eval.c:1174)
==32503== by 0x815F0EB: Rf_eval (eval.c:461)
==32503== by 0x8162210: Rf_applyClosure (eval.c:667)
The leaked blocks are allocated in the internal function build_range_exp() at
5200	  /* Use realloc since mbcset->range_starts and mbcset->range_ends
5201	     are NULL if *range_alloc == 0. */
5202	  new_array_start = re_realloc (mbcset->range_starts, wchar_t,
5203	                                new_nranges);
5204	  new_array_end = re_realloc (mbcset->range_ends, wchar_t,
5205...
2008 Aug 07
1
memory leak in sub("[range]", ...) when #ifndef _LIBC (PR#11946)
...tes in 0 blocks.
==28643== still reachable: 12,599,585 bytes in 5,915 blocks.
==28643== suppressed: 0 bytes in 0 blocks.
==28643== Reachable blocks (those to which a pointer was found) are not shown.
==28643== To see them, rerun with: --show-reachable=yes
The flagged memory block is the range_ends component of mbcset.
I think that range_starts was also being leaked, but valgrind was
combining the two.
It looks like the cpp macro _LIBC is not defined when I compile
R on this Linux box. regex.c defines range_ends and range_starts
as different types, depending on the value of _LIBC, and it...
2006 Mar 30
2
Functional test confusion
...> '255',
:octet3 => '254',
:octet4 => '0' },
:range_start => '10',
:range_end => '100' }
assert_response :success
puts NetworkSegment.find(:all).size
end
It seems that creating a new network_segment in my test is not working,
but I am not sure why.
$ ruby test/functional/networking_controller_test.rb
Loaded suite networking_controller_test
S...
2008 Aug 07
0
memory leak in sub("[range]", ...) when #ifndef _LIBC (PR#12488)
...== still reachable: 12,599,585 bytes in 5,915 blocks.
> ==28643== suppressed: 0 bytes in 0 blocks.
> ==28643== Reachable blocks (those to which a pointer was found) are not shown.
> ==28643== To see them, rerun with: --show-reachable=yes
>
> The flagged memory block is the range_ends component of mbcset.
> I think that range_starts was also being leaked, but valgrind was
> combining the two.
>
> It looks like the cpp macro _LIBC is not defined when I compile
> R on this Linux box. regex.c defines range_ends and range_starts
> as different types, depending on...
2019 Jul 24
2
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
..._notifier_invalidate_range_start_nonblock(&range)) {
> >         tlb_finish_mmu(&tlb, range.start, range.end);
> >         ret = false;
> >         continue;
> > }
> > unmap_page_range(&tlb, vma, range.start, range.end, NULL);
> > mmu_notifier_invalidate_range_end(&range);
> >
> > Which looks like it creates an unbalanced start/end pairing if any
> > start returns EAGAIN?
> >
> > This does not seem OK.. Many users require start/end to be paired to
> > keep track of their internal locking. Ie for instance hmm breaks
&...
2019 Jul 24
5
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...is is a
problem:
I see in __oom_reap_task_mm():
if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
        tlb_finish_mmu(&tlb, range.start, range.end);
        ret = false;
        continue;
}
unmap_page_range(&tlb, vma, range.start, range.end, NULL);
mmu_notifier_invalidate_range_end(&range);
Which looks like it creates an unbalanced start/end pairing if any
start returns EAGAIN?
This does not seem OK. Many users require start/end to be paired to
keep track of their internal locking; for instance, hmm breaks
because the hmm->notifiers counter becomes unable to get t...
2019 Jul 24
2
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...ve started relying on a new semantic in the meantime,
> > > back then, none of the notifiers had even started any action in blocking
> > > mode on an EAGAIN bailout. Most of them simply did trylock early in the
> > > process and bailed out, so there was nothing to do for the range_end
> > > callback.
> >
> > Single notifiers are not the problem. I tried to make this clear in
> > the commit message, but let's be more explicit.
> >
> > We have *two* notifiers registered to the mm, A and B:
> >
> > A invalidate_range_start: (has...
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...r here: when page is write protected then
> > > > it does not look like .invalidate_range is invoked.
> > > >
> > > > E.g. mm/ksm.c calls
> > > >
> > > > mmu_notifier_invalidate_range_start and
> > > > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
> > > >
> > > > Similarly, rmap in page_mkclean_one will not call
> > > > mmu_notifier_invalidate_range.
> > > >
> > > > If I'm right vhost won't get notified when page is write-protected si...
2019 Jul 23
4
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
The hmm_mirror_ops callback function sync_cpu_device_pagetables() passes
a struct hmm_update which is a simplified version of struct
mmu_notifier_range. This is unnecessary, so replace hmm_update with
mmu_notifier_range directly.
Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
Cc: "Jérôme Glisse" <jglisse at redhat.com>
Cc: Jason Gunthorpe <jgg at mellanox.com>
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...sk_mm():
>
> if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
>         tlb_finish_mmu(&tlb, range.start, range.end);
>         ret = false;
>         continue;
> }
> unmap_page_range(&tlb, vma, range.start, range.end, NULL);
> mmu_notifier_invalidate_range_end(&range);
>
> Which looks like it creates an unbalanced start/end pairing if any
> start returns EAGAIN?
>
> This does not seem OK. Many users require start/end to be paired to
> keep track of their internal locking; for instance, hmm breaks
> because the hmm->notifi...
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...be new users have started relying on a new semantic in the meantime,
> > back then, none of the notifiers had even started any action in blocking
> > mode on an EAGAIN bailout. Most of them simply did trylock early in the
> > process and bailed out, so there was nothing to do for the range_end
> > callback.
>
> Single notifiers are not the problem. I tried to make this clear in
> the commit message, but let's be more explicit.
>
> We have *two* notifiers registered to the mm, A and B:
>
> A invalidate_range_start: (has no blocking)
> spin_lock()
>...
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...ing on a new semantic in the meantime,
> > > > back then, none of the notifiers had even started any action in blocking
> > > > mode on an EAGAIN bailout. Most of them simply did trylock early in the
> > > > process and bailed out, so there was nothing to do for the range_end
> > > > callback.
> > >
> > > Single notifiers are not the problem. I tried to make this clear in
> > > the commit message, but let's be more explicit.
> > >
> > > We have *two* notifiers registered to the mm, A and B:
> > >
> &...
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...re: when page is write protected then
>>>>> it does not look like .invalidate_range is invoked.
>>>>>
>>>>> E.g. mm/ksm.c calls
>>>>>
>>>>> mmu_notifier_invalidate_range_start and
>>>>> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
>>>>>
>>>>> Similarly, rmap in page_mkclean_one will not call
>>>>> mmu_notifier_invalidate_range.
>>>>>
>>>>> If I'm right vhost won't get notified when page is write-protected...
2023 Mar 06
0
[PATCH drm-next v2 05/16] drm: manager to keep track of GPUs VA mappings
...I just can't spot though.
>
> This is probably my fault in how I explained things, I seem to have had
> a bug in my code.
>
> Let me try again.
>
> mas_walk(&mas) will go to the range of mas.index
> It will set mas.index = range_start
> It will set mas.last = range_end
> It will return entry in that range.
>
> Your code is walking to addr (0xc0000, say)
> You get NULL
> and the range is now: mas.index = 0, mas.last = ULONG_MAX
>
> You set mas.last = 0xc0000 + 0x40000 -1
> You store your va in the range of 0 - 0xfffff - This isn't wh...
2020 Sep 03
1
[PATCH v3] mm/thp: fix __split_huge_pmd_locked() for migration PMD
A migrating transparent huge page has to already be unmapped. Otherwise,
the page could be modified while it is being copied to a new page and
data could be lost. The function __split_huge_pmd() checks for a PMD
migration entry before calling __split_huge_pmd_locked() leading one to
think that __split_huge_pmd_locked() can handle splitting a migrating PMD.
However, the code always increments the
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...t;>>> {
>>> I also wonder here: when page is write protected then
>>> it does not look like .invalidate_range is invoked.
>>>
>>> E.g. mm/ksm.c calls
>>>
>>> mmu_notifier_invalidate_range_start and
>>> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
>>>
>>> Similarly, rmap in page_mkclean_one will not call
>>> mmu_notifier_invalidate_range.
>>>
>>> If I'm right vhost won't get notified when page is write-protected since you
>>> didn't insta...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...v_limit)
> > > {
> >
> > I also wonder here: when page is write protected then
> > it does not look like .invalidate_range is invoked.
> >
> > E.g. mm/ksm.c calls
> >
> > mmu_notifier_invalidate_range_start and
> > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
> >
> > Similarly, rmap in page_mkclean_one will not call
> > mmu_notifier_invalidate_range.
> >
> > If I'm right vhost won't get notified when page is write-protected since you
> > didn't install start/end not...
2012 Jun 26
6
[PATCH] Add a page cache-backed balloon device driver.
...However, we still suggest syncing the
+			 * diff so that we can get within the target range.
+			 */
+			s64 nr_to_write =
+				(!config_pages(vb) ? LONG_MAX : -diff);
+			struct writeback_control wbc = {
+				.sync_mode = WB_SYNC_ALL,
+				.nr_to_write = nr_to_write,
+				.range_start = 0,
+				.range_end = LLONG_MAX,
+			};
+			sync_inode(&the_inode.inode, &wbc);
+		}
+		update_balloon_size(vb);
+	}
+	return 0;
+}
+
+static ssize_t virtballoon_attr_show(struct device *dev,
+				     struct device_attribute *attr,
+				     char *buf);
+
+static DEVICE_ATTR(total_memory, 0644,
+		   virtballoon_...