Displaying 20 results from an estimated 46 matches for "page_mapcount".
2009 Mar 26
3
Install Zimbra on a Xen DomU
Hello,
I've got a big problem when I try to install Zimbra on a Xen Debian
Etch VM.
Eeek! page_mapcount(page) went negative! (-1)
page pfn = 7141
page->flags = 4000083c
page->count = 2
page->mapping = cd946510
vma->vm_ops = 0x0
------------[ cut here ]------------
kernel BUG at mm/rmap.c:669!
invalid opcode: 0000 [#1] SMP
Pid: 7717, comm: java Not tainted (2.6.25.7 #1)
EIP: 006...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...>> int flush_needed = 1;
>>> + bool is_anon = false;
>>>
>>> if (pmd_present(orig_pmd)) {
>>> page = pmd_page(orig_pmd);
>>> + is_anon = PageAnon(page);
>>> page_remove_rmap(page, true);
>>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>>> VM_BUG_ON_PAGE(!PageHead(page), page);
>>> } else if (thp_migration_supported()) {
>>> swp_entry_t entry;
>>>
>>> - VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>>> entry = pmd_to_swp_entry(orig_pmd...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...ol is_anon = false;
>>>>>
>>>>> if (pmd_present(orig_pmd)) {
>>>>> page = pmd_page(orig_pmd);
>>>>> + is_anon = PageAnon(page);
>>>>> page_remove_rmap(page, true);
>>>>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>>>>> VM_BUG_ON_PAGE(!PageHead(page), page);
>>>>> } else if (thp_migration_supported()) {
>>>>> swp_entry_t entry;
>>>>>
>>>>> - VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>>...
2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
..._struct *vma,
> } else {
> struct page *page = NULL;
> int flush_needed = 1;
> + bool is_anon = false;
>
> if (pmd_present(orig_pmd)) {
> page = pmd_page(orig_pmd);
> + is_anon = PageAnon(page);
> page_remove_rmap(page, true);
> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
> VM_BUG_ON_PAGE(!PageHead(page), page);
> } else if (thp_migration_supported()) {
> swp_entry_t entry;
>
> - VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
> entry = pmd_to_swp_entry(orig_pmd);
> - page = pfn_to_page(swp_offset(entry));
> ...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...struct page *page = NULL;
>> int flush_needed = 1;
>> + bool is_anon = false;
>>
>> if (pmd_present(orig_pmd)) {
>> page = pmd_page(orig_pmd);
>> + is_anon = PageAnon(page);
>> page_remove_rmap(page, true);
>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>> VM_BUG_ON_PAGE(!PageHead(page), page);
>> } else if (thp_migration_supported()) {
>> swp_entry_t entry;
>>
>> - VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>> entry = pmd_to_swp_entry(orig_pmd);
>> - page = p...
2015 Aug 24
1
abrt-watch-log -F BUG: WARNING: at WARNING: CPU: INFO: possible recursive locking detected
...0:00 /usr/bin/abrt-watch-log -F
BUG: WARNING: at WARNING: CPU: INFO: possible recursive locking detected
ernel BUG at list_del corruption list_add corruption do_IRQ: stack
overflow: ear stack overflow (cur: eneral protection fault nable to
handle kernel ouble fault: RTNL: assertion failed eek!
page_mapcount(page) went negative! adness at NETDEV WATCHDOG ysctl table
check failed : nobody cared IRQ handler type mismatch Machine Check
Exception: Machine check events logged divide error: bounds: coprocessor
segment overrun: invalid TSS: segment not present: invalid opcode:
alignment check: stack segme...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...;
>>>> + bool is_anon = false;
>>>>
>>>> if (pmd_present(orig_pmd)) {
>>>> page = pmd_page(orig_pmd);
>>>> + is_anon = PageAnon(page);
>>>> page_remove_rmap(page, true);
>>>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>>>> VM_BUG_ON_PAGE(!PageHead(page), page);
>>>> } else if (thp_migration_supported()) {
>>>> swp_entry_t entry;
>>>>
>>>> - VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>>>> entry...
2020 May 06
2
du hung, wild display in ps
...00:00:01 /usr/bin/abrt-watch-log -F BUG: WARNING: at WARNING: CPU: INFO: possible recursive locking detected ernel BUG at list_del corruption list_add corruption do_IRQ: stack overflow: ear stack overflow (cur: eneral protection fault nable to handle kernel ouble fault: RTNL: assertion failed eek! page_mapcount(page) went negative! adness at NETDEV WATCHDOG ysctl table check failed : nobody cared IRQ handler type mismatch Kernel panic - not syncing: Machine Check Exception: Machine check events logged divide error: bounds: coprocessor segment overrun: invalid TSS: segment not present: invalid opcode: alig...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...{
> >>>>> page = pmd_page(orig_pmd);
> >>>>> + is_anon = PageAnon(page);
> >>>>> page_remove_rmap(page, true);
> >>>>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
> >>>>> VM_BUG_ON_PAGE(!PageHead(page), page);
> >>>>> } else if (thp_migration_supported()) {
> >>>>> swp_entry_t entry;
> >>>>>
> ...
2019 Mar 08
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/8 5:27, Andrea Arcangeli wrote:
> Hello Jerome,
>
> On Thu, Mar 07, 2019 at 03:17:22PM -0500, Jerome Glisse wrote:
>> So for the above the easiest thing is to call set_page_dirty() from
>> the mmu notifier callback. It is always safe to use the non locking
>> variant from such callback. Well it is safe only if the page was
>> mapped with write permission
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...il ->invalidate_range is called) to avoid false positive gup
pin checks in things like KSM, or the pin must be released in
invalidate_range_start (which is called before the pin checks).
Here's why:
/*
* Check that no O_DIRECT or similar I/O is in progress on the
* page
*/
if (page_mapcount(page) + 1 + swapped != page_count(page)) {
set_pte_at(mm, pvmw.address, pvmw.pte, entry);
goto out_unlock;
}
[..]
set_pte_at_notify(mm, pvmw.address, pvmw.pte, entry);
^^^^^^^ too late release the pin here, the
above already failed
->invalidate_range cannot be used with m...
2018 Dec 20
0
abrt-watch-log -F BUG: WARNING:
...error ?
/usr/bin/abrt-watch-log -F BUG: WARNING: at WARNING: CPU: INFO: possible
recursive locking detected ernel BUG at list_del corruption list_add
corruption do_IRQ: stack overflow: ear stack overflow (cur: eneral
protection fault nable to handle kernel ouble fault: RTNL: assertion failed
eek! page_mapcount(page) went negative! adness at NETDEV WATCHDOG ysctl
table check failed : nobody cared IRQ handler type mismatch Kernel panic -
not syncing: Machine Check Exception: Machine check events logged divide
error: bounds: coprocessor segment overrun: invalid TSS: segment not
present: invalid opcode: alig...
2016 May 30
0
[PATCH v6v2 02/12] mm: migrate: support non-lru movable page migration
...signed long)mapping & PAGE_MAPPING_FLAGS)
> + if ((unsigned long)mapping & PAGE_MAPPING_ANON)
> return NULL;
> - return mapping;
> +
> + return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS);
> }
> +EXPORT_SYMBOL(page_mapping);
>
> /* Slow path of page_mapcount() for compound pages */
> int __page_mapcount(struct page *page)
>
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...>>> page = pmd_page(orig_pmd);
> > >>>>> + is_anon = PageAnon(page);
> > >>>>> page_remove_rmap(page, true);
> > >>>>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
> > >>>>> VM_BUG_ON_PAGE(!PageHead(page), page);
> > >>>>> } else if (thp_migration_supported()) {
> > >>>>> swp_entry_t entry;
> > >>...
2020 Jun 19
0
[PATCH 13/16] mm: support THP migration to device private memory
..._huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
} else {
struct page *page = NULL;
int flush_needed = 1;
+ bool is_anon = false;
if (pmd_present(orig_pmd)) {
page = pmd_page(orig_pmd);
+ is_anon = PageAnon(page);
page_remove_rmap(page, true);
VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
VM_BUG_ON_PAGE(!PageHead(page), page);
} else if (thp_migration_supported()) {
swp_entry_t entry;
- VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
entry = pmd_to_swp_entry(orig_pmd);
- page = pfn_to_page(swp_offset(entry));
+ if (is_device_private_entry(en...
2006 Feb 08
3
[PATCH] direct_remap_pfn_range vm_flags fix
direct_remap_pfn_range() does not properly mark vma with VM_PFNMAP.
This triggers improper reference counting on what rmap thought was
a normal page, and a subsequent BUG() such as:
Eeek! page_mapcount(page) went negative! (-1)
page->flags = 414
page->count = 1
page->mapping = 00000000
------------[ cut here ]------------
kernel BUG at /home/chrisw/hg/xen/xen-unstable/linux-2.6.16-rc2-xen0/mm/rmap.c:555!
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
---
diff -r 859c8d66...
2016 May 30
1
[PATCH v6v2 02/12] mm: migrate: support non-lru movable page migration
...G_FLAGS)
> >+ if ((unsigned long)mapping & PAGE_MAPPING_ANON)
> > return NULL;
> >- return mapping;
> >+
> >+ return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS);
> > }
> >+EXPORT_SYMBOL(page_mapping);
> >
> > /* Slow path of page_mapcount() for compound pages */
> > int __page_mapcount(struct page *page)
> >
>
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote:
>>>>> Which means after we fix vhost to add the flush_dcache_page after
>>>>> kunmap, Parisc will get a double hit (but it also means Parisc
>>>>> was the only one of those archs needed explicit cache flushes,
>>>>> where vhost worked correctly so far... so it kind of proves your